Jan 26 20:55:09 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 20:55:09 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 20:55:09 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc 
restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:09
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:09 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 
crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 20:55:10 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 20:55:10 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 26 20:55:10 crc kubenswrapper[4899]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.750271    4899 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754278    4899 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754305    4899 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754311    4899 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754317    4899 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754323    4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754328    4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754333    4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754339    4899 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754347    4899 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754354    4899 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754362    4899 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754369    4899 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754376    4899 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754383    4899 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754390    4899 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754395    4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754401    4899 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754406    4899 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754412    4899 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754417    4899 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754436    4899 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754442    4899 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754447    4899 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754452    4899 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754457    4899 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754463    4899 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754471    4899 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754477    4899 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754483    4899 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754489    4899 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754495    4899 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754500    4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754506    4899 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754511    4899 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754516    4899 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754521    4899 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754527    4899 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754533    4899 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754538    4899 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754544    4899 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754549    4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754556    4899 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754563    4899 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754568    4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754573    4899 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754577    4899 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754583    4899 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754588    4899 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754592    4899 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754597    4899 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754601    4899 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754605    4899 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754610    4899 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754615    4899 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754620    4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754624    4899 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754628    4899 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754632    4899 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754637    4899 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754643    4899 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754647    4899 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754652    4899 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754656    4899 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754660    4899 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754665    4899 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754669    4899 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 20:55:10 crc kubenswrapper[4899]: 
W0126 20:55:10.754674 4899 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754680 4899 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754686 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754691 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.754696 4899 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754806 4899 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754819 4899 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754830 4899 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754838 4899 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754847 4899 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754854 4899 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754862 4899 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754870 4899 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754876 4899 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754882 4899 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754888 
4899 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754895 4899 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754901 4899 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754906 4899 flags.go:64] FLAG: --cgroup-root="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754912 4899 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754917 4899 flags.go:64] FLAG: --client-ca-file="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754926 4899 flags.go:64] FLAG: --cloud-config="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754952 4899 flags.go:64] FLAG: --cloud-provider="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754958 4899 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754965 4899 flags.go:64] FLAG: --cluster-domain="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754971 4899 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754976 4899 flags.go:64] FLAG: --config-dir="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754982 4899 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754988 4899 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.754996 4899 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755002 4899 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755007 4899 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 
20:55:10.755014 4899 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755020 4899 flags.go:64] FLAG: --contention-profiling="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755025 4899 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755030 4899 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755036 4899 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755041 4899 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755049 4899 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755054 4899 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755059 4899 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755067 4899 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755074 4899 flags.go:64] FLAG: --enable-server="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755081 4899 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755089 4899 flags.go:64] FLAG: --event-burst="100" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755095 4899 flags.go:64] FLAG: --event-qps="50" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755101 4899 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755106 4899 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755112 4899 flags.go:64] FLAG: --eviction-hard="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 
20:55:10.755120 4899 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755125 4899 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755130 4899 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755135 4899 flags.go:64] FLAG: --eviction-soft="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755140 4899 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755145 4899 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755150 4899 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755155 4899 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755162 4899 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755167 4899 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755172 4899 flags.go:64] FLAG: --feature-gates="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755179 4899 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755184 4899 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755189 4899 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755195 4899 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755200 4899 flags.go:64] FLAG: --healthz-port="10248" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755205 4899 flags.go:64] FLAG: --help="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 
20:55:10.755211 4899 flags.go:64] FLAG: --hostname-override="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755216 4899 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755221 4899 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755226 4899 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755231 4899 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755236 4899 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755242 4899 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755248 4899 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755254 4899 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755260 4899 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755266 4899 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755271 4899 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755276 4899 flags.go:64] FLAG: --kube-reserved="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755282 4899 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755287 4899 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755292 4899 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755298 4899 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 20:55:10 crc 
kubenswrapper[4899]: I0126 20:55:10.755303 4899 flags.go:64] FLAG: --lock-file="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755308 4899 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755313 4899 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755319 4899 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755327 4899 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755332 4899 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755337 4899 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755342 4899 flags.go:64] FLAG: --logging-format="text" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755347 4899 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755353 4899 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755358 4899 flags.go:64] FLAG: --manifest-url="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755362 4899 flags.go:64] FLAG: --manifest-url-header="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755369 4899 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755375 4899 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755381 4899 flags.go:64] FLAG: --max-pods="110" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755386 4899 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755391 4899 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 20:55:10 crc 
kubenswrapper[4899]: I0126 20:55:10.755396 4899 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755401 4899 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755407 4899 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755412 4899 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755419 4899 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755432 4899 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755438 4899 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755444 4899 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755450 4899 flags.go:64] FLAG: --pod-cidr="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755455 4899 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755465 4899 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755470 4899 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755475 4899 flags.go:64] FLAG: --pods-per-core="0" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755481 4899 flags.go:64] FLAG: --port="10250" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755486 4899 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755492 4899 flags.go:64] FLAG: 
--provider-id="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755497 4899 flags.go:64] FLAG: --qos-reserved="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755502 4899 flags.go:64] FLAG: --read-only-port="10255" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755508 4899 flags.go:64] FLAG: --register-node="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755513 4899 flags.go:64] FLAG: --register-schedulable="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755518 4899 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755528 4899 flags.go:64] FLAG: --registry-burst="10" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755533 4899 flags.go:64] FLAG: --registry-qps="5" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755538 4899 flags.go:64] FLAG: --reserved-cpus="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755543 4899 flags.go:64] FLAG: --reserved-memory="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755550 4899 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755555 4899 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755561 4899 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755566 4899 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755572 4899 flags.go:64] FLAG: --runonce="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755577 4899 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755583 4899 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755589 4899 flags.go:64] FLAG: --seccomp-default="false" Jan 
26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755594 4899 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755599 4899 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755604 4899 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755609 4899 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755616 4899 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755621 4899 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755626 4899 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755632 4899 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755637 4899 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755642 4899 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755647 4899 flags.go:64] FLAG: --system-cgroups="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755652 4899 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755660 4899 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755665 4899 flags.go:64] FLAG: --tls-cert-file="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755670 4899 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755677 4899 flags.go:64] FLAG: --tls-min-version="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755682 4899 flags.go:64] FLAG: 
--tls-private-key-file="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755686 4899 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755693 4899 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755698 4899 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755704 4899 flags.go:64] FLAG: --v="2" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755711 4899 flags.go:64] FLAG: --version="false" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755718 4899 flags.go:64] FLAG: --vmodule="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755724 4899 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.755730 4899 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755873 4899 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755879 4899 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755886 4899 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755892 4899 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755897 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755901 4899 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755906 4899 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755911 4899 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755916 4899 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755920 4899 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755957 4899 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755962 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755967 4899 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755972 4899 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755982 4899 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755987 4899 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.755991 4899 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 
20:55:10.755996 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756000 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756004 4899 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756009 4899 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756013 4899 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756018 4899 feature_gate.go:330] unrecognized feature gate: Example Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756022 4899 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756026 4899 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756031 4899 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756035 4899 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756040 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756044 4899 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756050 4899 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756054 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756059 4899 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756064 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756069 4899 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756075 4899 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756080 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756084 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756089 4899 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756094 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756099 4899 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756104 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756108 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756112 4899 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756117 4899 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756123 4899 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756128 4899 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756138 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756142 4899 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756147 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756151 4899 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756157 4899 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756163 4899 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756167 4899 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756172 4899 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756177 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756181 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756186 4899 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756190 4899 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756194 4899 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756212 4899 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 20:55:10 crc 
kubenswrapper[4899]: W0126 20:55:10.756217 4899 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756222 4899 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756226 4899 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756230 4899 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756234 4899 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756241 4899 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756246 4899 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756250 4899 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756254 4899 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756258 4899 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.756263 4899 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.756492 4899 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.766857 4899 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.766903 4899 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767765 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767820 4899 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767830 4899 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767839 4899 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767848 4899 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767857 4899 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767866 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767874 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767884 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767894 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767902 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767910 4899 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767918 4899 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767952 4899 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767961 4899 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767969 4899 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767977 4899 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767985 4899 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.767998 4899 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768009 4899 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768018 4899 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768026 4899 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768034 4899 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768042 4899 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768051 4899 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768062 4899 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768073 4899 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768082 4899 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768091 4899 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768099 4899 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768107 4899 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768114 4899 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768123 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768132 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768150 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768158 4899 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768166 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768174 4899 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768182 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768190 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 
20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768198 4899 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768205 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768213 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768224 4899 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768276 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768287 4899 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768297 4899 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768308 4899 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768320 4899 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768331 4899 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768344 4899 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768354 4899 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768364 4899 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768374 4899 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 
20:55:10.768387 4899 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768398 4899 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768409 4899 feature_gate.go:330] unrecognized feature gate: Example Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768419 4899 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768429 4899 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768440 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768450 4899 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768459 4899 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768469 4899 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768480 4899 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768490 4899 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768503 4899 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768513 4899 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768523 4899 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768532 4899 
feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768542 4899 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768553 4899 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.768572 4899 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768901 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768921 4899 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768966 4899 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768978 4899 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768988 4899 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.768998 4899 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769008 4899 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769018 4899 feature_gate.go:330] unrecognized feature 
gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769031 4899 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769040 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769050 4899 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769061 4899 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769071 4899 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769081 4899 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769090 4899 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769100 4899 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769110 4899 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769120 4899 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769131 4899 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769140 4899 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769150 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769160 4899 feature_gate.go:330] unrecognized feature gate: 
MetricsCollectionProfiles Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769170 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769179 4899 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769190 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769205 4899 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769217 4899 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769227 4899 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769238 4899 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769248 4899 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769258 4899 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769268 4899 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769278 4899 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769289 4899 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769299 4899 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769309 4899 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769318 4899 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769327 4899 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769336 4899 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769347 4899 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769357 4899 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769367 4899 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769377 4899 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769390 4899 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769404 4899 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769415 4899 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769426 4899 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769436 4899 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769450 4899 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769463 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769476 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769486 4899 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769498 4899 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769509 4899 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769519 4899 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769530 4899 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769539 4899 feature_gate.go:330] unrecognized feature gate: Example Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769549 4899 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769558 4899 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769567 4899 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769576 4899 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769585 4899 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769592 4899 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 20:55:10 crc 
kubenswrapper[4899]: W0126 20:55:10.769600 4899 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769608 4899 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769616 4899 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769624 4899 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769632 4899 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769640 4899 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769648 4899 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.769655 4899 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.769669 4899 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.770006 4899 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.775486 4899 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.775675 4899 
certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.776738 4899 server.go:997] "Starting client certificate rotation" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.776784 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.777584 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 15:42:33.264416294 +0000 UTC Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.777729 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.784605 4899 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.786001 4899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.787339 4899 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.795310 4899 log.go:25] "Validated CRI v1 runtime API" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.813867 4899 log.go:25] "Validated CRI v1 image API" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.816290 4899 server.go:1437] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.819724 4899 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-20-51-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.819787 4899 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.848389 4899 manager.go:217] Machine: {Timestamp:2026-01-26 20:55:10.846011603 +0000 UTC m=+0.227599710 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ad899ebe-e8fa-491d-aaa1-e267ccbcc124 BootID:b67aa14a-3c73-44b6-a040-2aaa760f288c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs 
Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:38:c3:bd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:38:c3:bd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:60:01:7b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:de:4b:f7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:46:19:1d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2a:59:ad Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:7f:4d:28:20:79 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:0a:e6:6a:33:d8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 
BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 20:55:10 crc 
kubenswrapper[4899]: I0126 20:55:10.848859 4899 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.849148 4899 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.849828 4899 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.850241 4899 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.850304 4899 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePe
riod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.850707 4899 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.850730 4899 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.851125 4899 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.851173 4899 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.851789 4899 state_mem.go:36] "Initialized new in-memory state store" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.851984 4899 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.853026 4899 kubelet.go:418] "Attempting to sync node with API server" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.853071 4899 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.853130 4899 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.853156 4899 kubelet.go:324] "Adding apiserver pod source" Jan 26 20:55:10 crc 
kubenswrapper[4899]: I0126 20:55:10.853177 4899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.855439 4899 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.855433 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.855556 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.855757 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.855900 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.856023 4899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.857266 4899 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858120 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858167 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858187 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858203 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858228 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858243 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858259 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858283 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858299 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858316 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858362 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858378 4899 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.858661 4899 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.859437 4899 server.go:1280] "Started kubelet" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.860082 4899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.860588 4899 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.860079 4899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.861490 4899 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.861851 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.861890 4899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.861993 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:12:05.517584204 +0000 UTC Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.862345 4899 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.862380 4899 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.862429 4899 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 20:55:10 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.867022 4899 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.867783 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.867869 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.869393 4899 server.go:460] "Adding debug handlers to kubelet server" Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.870208 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.870539 4899 factory.go:55] Registering systemd factory Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.870581 4899 factory.go:221] Registration of the systemd container factory successfully Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.870469 4899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6352cbb65f63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 20:55:10.859378531 +0000 UTC m=+0.240966608,LastTimestamp:2026-01-26 20:55:10.859378531 +0000 UTC m=+0.240966608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871036 4899 factory.go:153] Registering CRI-O factory Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871061 4899 factory.go:221] Registration of the crio container factory successfully Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871125 4899 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871169 4899 factory.go:103] Registering Raw factory Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871185 4899 manager.go:1196] Started watching for new ooms in manager Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.871974 4899 manager.go:319] Starting recovery of all containers Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.881994 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882405 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882423 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882442 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882457 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882470 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882485 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882498 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882514 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882529 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882543 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882557 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882569 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882614 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882627 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882640 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882654 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882666 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882679 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882693 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882705 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882720 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882732 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882782 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882796 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.882809 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 
26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883126 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883146 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883160 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883171 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883182 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883231 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883245 4899 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883255 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883266 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883312 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883324 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883337 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883348 4899 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883358 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883371 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883391 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883406 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883419 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883434 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883453 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883464 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883480 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883497 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883510 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883524 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883538 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883557 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883574 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883590 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883606 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883620 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" 
seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883634 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883650 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883663 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883676 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883690 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883702 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: 
I0126 20:55:10.883713 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883774 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883788 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883802 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883815 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883827 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883862 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883875 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883887 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883918 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883948 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883964 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883980 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.883993 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884005 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884018 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884029 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884042 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884053 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884066 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884079 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884092 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884104 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884117 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884131 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884144 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884156 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884172 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884186 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884202 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884214 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884228 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884240 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884254 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884267 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884281 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884319 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884331 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884345 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884356 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884371 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884390 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884403 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884416 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884430 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884443 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884456 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884469 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884482 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884500 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884514 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884595 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884614 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884627 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884639 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884651 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884666 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884678 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884691 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884704 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884717 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884731 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884744 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884756 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884768 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884783 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884796 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884809 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884822 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884836 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884848 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884860 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884875 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884890 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884902 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.884915 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885039 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885053 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885065 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885080 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885093 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885108 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885126 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885139 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885153 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885167 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885180 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885197 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885211 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885225 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885240 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885255 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885267 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885281 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885293 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885304 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885320 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885338 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885349 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885361 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885373 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885385 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885397 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885410 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885422 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885437 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885454 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885468 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885478 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885490 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885501 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885513 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885523 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885541 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885554 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885566 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885578 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885598 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885610 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885624 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885637 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885649 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885663 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885678 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885692 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885706 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885719 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885730 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885742 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls"
seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885756 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885769 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885782 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885797 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885811 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885823 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 
20:55:10.885834 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885847 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885860 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885872 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885886 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885897 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.885908 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.901992 4899 manager.go:324] Recovery completed Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.903552 4899 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.903646 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.903672 4899 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.903690 4899 reconstruct.go:97] "Volume reconstruction finished" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.903701 4899 reconciler.go:26] "Reconciler: start to sync state" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.920548 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.923702 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 
20:55:10.923806 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.923829 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.927063 4899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.929303 4899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.929351 4899 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.929385 4899 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.929432 4899 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.931516 4899 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.931549 4899 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.931578 4899 state_mem.go:36] "Initialized new in-memory state store" Jan 26 20:55:10 crc kubenswrapper[4899]: W0126 20:55:10.932797 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.932969 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.942848 4899 policy_none.go:49] "None policy: Start" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.943900 4899 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 20:55:10 crc kubenswrapper[4899]: I0126 20:55:10.943948 4899 state_mem.go:35] "Initializing new in-memory state store" Jan 26 20:55:10 crc kubenswrapper[4899]: E0126 20:55:10.962882 4899 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.002351 4899 manager.go:334] "Starting Device Plugin manager" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.002419 4899 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.002437 4899 server.go:79] "Starting device plugin registration server" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.002980 4899 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.002999 4899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.003169 4899 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.003365 4899 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.003378 4899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.014218 4899 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.029703 4899 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.029861 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.030709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.030740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.030749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.030864 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031285 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031362 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031581 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031594 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031787 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.031989 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.032045 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.032579 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.032611 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.032621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.033095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.033181 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.033245 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.034098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.034183 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.034247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.034425 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc 
kubenswrapper[4899]: I0126 20:55:11.035052 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035145 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035209 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035277 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035456 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035647 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.035722 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036597 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036639 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036659 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036668 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036704 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.036724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.037056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.037120 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.037187 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.037118 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.037306 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.038247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.038284 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.038298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.071116 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.103745 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.104994 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.105066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.105077 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.105103 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.105595 4899 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106699 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106740 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106771 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106798 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106824 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106849 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106872 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106897 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106938 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106962 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 
20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.106983 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.107005 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.107029 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.107049 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.107083 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 
20:55:11.208010 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208081 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208119 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208341 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208356 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208484 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") 
" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208560 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208644 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208712 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208376 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208739 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208766 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208809 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208831 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208892 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.208894 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209016 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209032 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209066 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209111 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209130 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209156 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209216 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209248 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209258 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209302 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209328 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209368 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209437 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.209503 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.305985 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.307455 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.307495 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.307509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.307536 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.308092 4899 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.361138 4899 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.376423 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.392631 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: W0126 20:55:11.395000 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4c5b1564d1d59bf355aa67566aa5e43e912c91e76085c5784dcf52feefe0301e WatchSource:0}: Error finding container 4c5b1564d1d59bf355aa67566aa5e43e912c91e76085c5784dcf52feefe0301e: Status 404 returned error can't find the container with id 4c5b1564d1d59bf355aa67566aa5e43e912c91e76085c5784dcf52feefe0301e Jan 26 20:55:11 crc kubenswrapper[4899]: W0126 20:55:11.405030 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f4fc45ca7b2b82c9e903933ac9533cda248f193b37544ae570374e43f62ecee7 WatchSource:0}: Error finding container f4fc45ca7b2b82c9e903933ac9533cda248f193b37544ae570374e43f62ecee7: Status 404 returned error can't find the container with id f4fc45ca7b2b82c9e903933ac9533cda248f193b37544ae570374e43f62ecee7 Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.421796 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.429338 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:11 crc kubenswrapper[4899]: W0126 20:55:11.437287 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-991dff6f9c0a0e49b63fbc468b6f4ab5df1d9d95b6c702043077712010aae03b WatchSource:0}: Error finding container 991dff6f9c0a0e49b63fbc468b6f4ab5df1d9d95b6c702043077712010aae03b: Status 404 returned error can't find the container with id 991dff6f9c0a0e49b63fbc468b6f4ab5df1d9d95b6c702043077712010aae03b Jan 26 20:55:11 crc kubenswrapper[4899]: W0126 20:55:11.448345 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-03ae9b74e685b2d11494dbd1e84adc30a988c33c89db634981b839edeeb76564 WatchSource:0}: Error finding container 03ae9b74e685b2d11494dbd1e84adc30a988c33c89db634981b839edeeb76564: Status 404 returned error can't find the container with id 03ae9b74e685b2d11494dbd1e84adc30a988c33c89db634981b839edeeb76564 Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.473437 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.708480 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.710043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.710090 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.710118 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.710147 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.710674 4899 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 26 20:55:11 crc kubenswrapper[4899]: W0126 20:55:11.812464 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:11 crc kubenswrapper[4899]: E0126 20:55:11.812978 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.862145 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:01:19.851885923 +0000 UTC Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.862331 4899 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.943101 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.943222 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1853b9ee3e985e6b024d9d4e67da27bcc5ac15dbc7423fdd35f669306e2d9d41"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.946245 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d" exitCode=0 Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.946352 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.946415 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03ae9b74e685b2d11494dbd1e84adc30a988c33c89db634981b839edeeb76564"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.946642 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.948637 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.948675 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 
20:55:11.948686 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.949037 4899 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="478c38fe8697263c28112f2e91401f7ddbdffdccc4ebb46a70fb8d066215b573" exitCode=0 Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.949154 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"478c38fe8697263c28112f2e91401f7ddbdffdccc4ebb46a70fb8d066215b573"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.949210 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"991dff6f9c0a0e49b63fbc468b6f4ab5df1d9d95b6c702043077712010aae03b"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.949399 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.950879 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.950918 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.950981 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.951000 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.952829 4899 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="04957116bc47667fa31fa0df4d91ab9b03496c10ae8ff3964d5a3814d37fd374" exitCode=0 Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.953170 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"04957116bc47667fa31fa0df4d91ab9b03496c10ae8ff3964d5a3814d37fd374"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.953267 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f4fc45ca7b2b82c9e903933ac9533cda248f193b37544ae570374e43f62ecee7"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.953496 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.954392 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.954454 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.954495 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.956823 4899 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105" exitCode=0 Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.956877 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.956911 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4c5b1564d1d59bf355aa67566aa5e43e912c91e76085c5784dcf52feefe0301e"} Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.957240 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.957304 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.957349 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.957361 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.958248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.958294 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:11 crc kubenswrapper[4899]: I0126 20:55:11.958311 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:12 crc kubenswrapper[4899]: W0126 20:55:12.033088 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 
26 20:55:12 crc kubenswrapper[4899]: E0126 20:55:12.033199 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:12 crc kubenswrapper[4899]: W0126 20:55:12.131498 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:12 crc kubenswrapper[4899]: E0126 20:55:12.131587 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:12 crc kubenswrapper[4899]: E0126 20:55:12.274264 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 26 20:55:12 crc kubenswrapper[4899]: W0126 20:55:12.444122 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 26 20:55:12 crc kubenswrapper[4899]: E0126 20:55:12.444240 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.511289 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.512714 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.512752 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.512763 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.512792 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:12 crc kubenswrapper[4899]: E0126 20:55:12.513335 4899 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.802027 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.862579 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:07:31.721001089 +0000 UTC Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.960701 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.960749 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.960763 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.960943 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.961656 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.961679 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.961689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.965248 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.965273 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.965282 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.965340 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.970207 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.970234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.970244 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.974152 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.974380 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.974390 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.974399 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.975887 4899 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0ec189fa247192289c1f682073bd0fa4d9318e597e6aad5cadc0c0149dd612cd" exitCode=0 Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.975945 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0ec189fa247192289c1f682073bd0fa4d9318e597e6aad5cadc0c0149dd612cd"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.976159 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.977116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.977143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.977155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.981039 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bfe78673c3c9d93a82c34200cda3ec05c07d2b88c77242644fb81bfb8823589b"} Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.981178 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.982443 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.986261 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.986312 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:12 crc kubenswrapper[4899]: I0126 20:55:12.986327 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.862831 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:13:29.625624718 +0000 UTC Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.987331 4899 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d0c85a4032534742d966574b7ad3501324a70e760d9e4f04248f2ffd5dec072" exitCode=0 Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.987393 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d0c85a4032534742d966574b7ad3501324a70e760d9e4f04248f2ffd5dec072"} Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.987725 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 
20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.989388 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.989425 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.989440 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.991118 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073"} Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.991136 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.991285 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992882 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992893 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992947 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992970 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.992982 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:13 crc kubenswrapper[4899]: I0126 20:55:13.997199 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.114291 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.115558 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.115594 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.115604 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.115627 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.863706 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:17:11.071286756 +0000 UTC Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.997889 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998433 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0bb12d4f860867706c8a9a4455cee94703e4453e7b675c5eb6a40f4c1a2b531b"} Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998473 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f66a049373b9d2ba022fb8406970c04d5fd5e8669e82d0392e25c23ebe65c8c8"} Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998487 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4eebacfaa14ea138fce054167caad9e0094f0d277455079fc2a2fec032a34bcb"} Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998500 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc58190a2b7e22a1627999707d004a1997e304dbdb27feae2fe3967f81d3ef63"} Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998572 4899 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998596 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998955 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998981 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.998992 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.999614 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.999631 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
20:55:14 crc kubenswrapper[4899]: I0126 20:55:14.999639 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:15 crc kubenswrapper[4899]: I0126 20:55:15.184585 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:15 crc kubenswrapper[4899]: I0126 20:55:15.235261 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:15 crc kubenswrapper[4899]: I0126 20:55:15.864859 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:45:10.56839575 +0000 UTC Jan 26 20:55:15 crc kubenswrapper[4899]: I0126 20:55:15.983613 4899 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 20:55:15 crc kubenswrapper[4899]: I0126 20:55:15.983727 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.005350 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abf310f8398aea79336a0d1af1eb5fdbe2cea52abde2a09f0ed57ba2ea0f37d2"} Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.005382 4899 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.005482 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.005502 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.005523 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007240 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007262 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007304 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.007435 4899 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:16 crc kubenswrapper[4899]: I0126 20:55:16.865748 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:06:23.058092286 +0000 UTC Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.008113 4899 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.008133 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.008156 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009267 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009268 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009413 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009427 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009384 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.009483 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.582593 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 
20:55:17 crc kubenswrapper[4899]: I0126 20:55:17.866334 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:44:09.507016019 +0000 UTC Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.010685 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.012353 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.012399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.012437 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.380760 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.381130 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.382808 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.382890 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.382914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 20:55:18.755519 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 20:55:18 crc kubenswrapper[4899]: I0126 
20:55:18.867017 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:30:29.954421951 +0000 UTC Jan 26 20:55:19 crc kubenswrapper[4899]: I0126 20:55:19.014286 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:19 crc kubenswrapper[4899]: I0126 20:55:19.015870 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:19 crc kubenswrapper[4899]: I0126 20:55:19.015942 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:19 crc kubenswrapper[4899]: I0126 20:55:19.015958 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:19 crc kubenswrapper[4899]: I0126 20:55:19.867796 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:42:47.823346188 +0000 UTC Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.021274 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.021551 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.022841 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.022877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.022888 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.432136 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.432417 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.434213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.434260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.434269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.446300 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:20 crc kubenswrapper[4899]: I0126 20:55:20.868756 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:41:30.926458126 +0000 UTC Jan 26 20:55:21 crc kubenswrapper[4899]: E0126 20:55:21.014366 4899 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.018649 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.018896 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.019892 4899 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.019943 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.019955 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.024176 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:21 crc kubenswrapper[4899]: I0126 20:55:21.869361 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:10:40.937753829 +0000 UTC Jan 26 20:55:22 crc kubenswrapper[4899]: I0126 20:55:22.021803 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:22 crc kubenswrapper[4899]: I0126 20:55:22.023545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:22 crc kubenswrapper[4899]: I0126 20:55:22.023622 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:22 crc kubenswrapper[4899]: I0126 20:55:22.023651 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:22 crc kubenswrapper[4899]: E0126 20:55:22.803656 4899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 20:55:22 
crc kubenswrapper[4899]: I0126 20:55:22.862329 4899 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 20:55:22 crc kubenswrapper[4899]: I0126 20:55:22.870968 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:05:16.315041616 +0000 UTC Jan 26 20:55:23 crc kubenswrapper[4899]: I0126 20:55:23.024312 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:23 crc kubenswrapper[4899]: I0126 20:55:23.025769 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:23 crc kubenswrapper[4899]: I0126 20:55:23.025836 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:23 crc kubenswrapper[4899]: I0126 20:55:23.025860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:23 crc kubenswrapper[4899]: I0126 20:55:23.871412 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:38:27.480595705 +0000 UTC Jan 26 20:55:23 crc kubenswrapper[4899]: E0126 20:55:23.875697 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 20:55:24 crc kubenswrapper[4899]: E0126 20:55:24.117068 4899 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 20:55:24 crc kubenswrapper[4899]: W0126 20:55:24.297846 4899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.298059 4899 trace.go:236] Trace[665425670]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 20:55:14.296) (total time: 10001ms): Jan 26 20:55:24 crc kubenswrapper[4899]: Trace[665425670]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:55:24.297) Jan 26 20:55:24 crc kubenswrapper[4899]: Trace[665425670]: [10.001664223s] [10.001664223s] END Jan 26 20:55:24 crc kubenswrapper[4899]: E0126 20:55:24.298106 4899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.586186 4899 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.586271 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.594598 4899 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.594713 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 20:55:24 crc kubenswrapper[4899]: I0126 20:55:24.871914 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:11:39.408652036 +0000 UTC Jan 26 20:55:25 crc kubenswrapper[4899]: I0126 20:55:25.194065 4899 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]log ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]etcd ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 20:55:25 crc kubenswrapper[4899]: 
[+]poststarthook/start-apiserver-admission-initializer ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-apiextensions-informers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/crd-informer-synced ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 20:55:25 crc kubenswrapper[4899]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/bootstrap-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 20:55:25 crc kubenswrapper[4899]: 
[+]poststarthook/start-kube-aggregator-informers ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-registration-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]autoregister-completion ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 20:55:25 crc kubenswrapper[4899]: livez check failed Jan 26 20:55:25 crc kubenswrapper[4899]: I0126 20:55:25.194164 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:55:25 crc kubenswrapper[4899]: I0126 20:55:25.872808 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:22:49.974839248 +0000 UTC Jan 26 20:55:25 crc kubenswrapper[4899]: I0126 20:55:25.983510 4899 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 20:55:25 crc kubenswrapper[4899]: I0126 20:55:25.983578 4899 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 20:55:26 crc kubenswrapper[4899]: I0126 20:55:26.873982 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:18:38.290689303 +0000 UTC Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.147046 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.170405 4899 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.318816 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.324243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.324303 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.324319 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.324359 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:27 crc kubenswrapper[4899]: E0126 20:55:27.330441 4899 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 20:55:27 crc kubenswrapper[4899]: I0126 20:55:27.874914 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:30:31.660150348 +0000 UTC Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.786817 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.787109 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.788605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.788660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.788680 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.804149 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 20:55:28 crc kubenswrapper[4899]: I0126 20:55:28.876000 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:05:15.448641533 +0000 UTC Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.038077 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.039344 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.039442 4899 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.039470 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.577518 4899 trace.go:236] Trace[1407434747]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 20:55:15.071) (total time: 14505ms): Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[1407434747]: ---"Objects listed" error: 14505ms (20:55:29.577) Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[1407434747]: [14.505998731s] [14.505998731s] END Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.577572 4899 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.578737 4899 trace.go:236] Trace[317579533]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 20:55:14.569) (total time: 15009ms): Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[317579533]: ---"Objects listed" error: 15009ms (20:55:29.578) Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[317579533]: [15.009505798s] [15.009505798s] END Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.578783 4899 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.580212 4899 trace.go:236] Trace[1212020906]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 20:55:14.816) (total time: 14763ms): Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[1212020906]: ---"Objects listed" error: 14763ms (20:55:29.579) Jan 26 20:55:29 crc kubenswrapper[4899]: Trace[1212020906]: [14.763731021s] [14.763731021s] END Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.580246 4899 reflector.go:368] Caches 
populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.580447 4899 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.621742 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.622024 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.623603 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.623646 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.623659 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:29 crc kubenswrapper[4899]: I0126 20:55:29.877048 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:24:46.64184824 +0000 UTC Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.042729 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.045040 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073" exitCode=255 Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.045083 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073"} Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.045401 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.046507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.046557 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.046574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.047480 4899 scope.go:117] "RemoveContainer" containerID="4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.183213 4899 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.198396 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.241977 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.865357 4899 apiserver.go:52] "Watching apiserver" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.868022 4899 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.868322 4899 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.868776 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.868857 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.868956 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.869054 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:30 crc kubenswrapper[4899]: E0126 20:55:30.869336 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.869921 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:30 crc kubenswrapper[4899]: E0126 20:55:30.869960 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:30 crc kubenswrapper[4899]: E0126 20:55:30.870015 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.870048 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.871974 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.873257 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.873484 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.873621 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.874643 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.874892 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.875015 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.875044 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.875127 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.877181 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:27:02.01608898 +0000 
UTC Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.904592 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.919774 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.931878 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.950569 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.968109 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.968243 4899 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.985534 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989238 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989290 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989318 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989348 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989374 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989405 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989435 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989466 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989493 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989525 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989568 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989598 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989629 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989662 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989690 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989717 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989745 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989772 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989801 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989826 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989852 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989877 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989902 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989947 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.989976 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990004 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990033 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990065 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990092 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990117 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 20:55:30 crc 
kubenswrapper[4899]: I0126 20:55:30.990141 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990168 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990197 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990220 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990249 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990272 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990294 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990320 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990342 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990367 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990390 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 
20:55:30.990414 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990438 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990462 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990488 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990512 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990535 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990562 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990587 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990634 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990663 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990688 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 20:55:30 crc 
kubenswrapper[4899]: I0126 20:55:30.990712 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990738 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990768 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990794 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990821 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990845 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990871 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990899 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990945 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.990973 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991000 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 20:55:30 crc 
kubenswrapper[4899]: I0126 20:55:30.991023 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991050 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991075 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991100 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991132 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991187 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991212 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991236 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991259 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991284 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991312 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991334 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991363 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991385 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991405 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991428 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991451 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991482 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991506 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991530 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991553 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991576 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991602 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991626 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991650 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991032 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991270 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991457 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991769 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992889 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991973 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992086 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992081 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992283 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992287 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992473 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992539 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992631 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992664 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.992816 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993028 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993053 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993323 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993342 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993402 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993478 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993643 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993669 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.993748 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.991673 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.994541 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.994573 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.994598 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.994622 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:30 crc kubenswrapper[4899]: I0126 20:55:30.994647 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994678 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994706 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994743 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994770 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994798 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994826 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994853 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994877 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 20:55:31 crc 
kubenswrapper[4899]: I0126 20:55:30.994899 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994941 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994969 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994995 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995044 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995069 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995094 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995119 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995143 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995185 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995212 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 20:55:31 crc kubenswrapper[4899]: 
I0126 20:55:30.995237 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995261 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995284 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995310 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995333 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995358 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995385 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995408 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995521 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995549 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999202 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " 
Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999239 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999264 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999288 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999310 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999331 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999351 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999369 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999388 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999409 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999427 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999445 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999463 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999483 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999500 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999517 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999540 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999559 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999578 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999598 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999616 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999635 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999656 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999673 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999690 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999707 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999726 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999743 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999760 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999776 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999795 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999812 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999830 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999848 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999872 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999897 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999919 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.999989 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000011 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000029 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000048 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000066 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000085 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000102 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000120 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 
20:55:31.000136 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000153 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000173 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000193 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000212 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000231 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000247 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000270 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000290 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000308 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000325 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000342 4899 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000358 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000381 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000403 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000423 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000443 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000466 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000484 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000505 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000523 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000544 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000563 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000580 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000600 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000619 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000640 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000693 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000728 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000752 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000775 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000796 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000817 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000841 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000865 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000896 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000967 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc 
kubenswrapper[4899]: I0126 20:55:31.000993 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001014 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001033 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001053 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001134 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001148 4899 reconciler_common.go:293] "Volume detached for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001159 4899 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001171 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001181 4899 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001192 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001202 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001212 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001223 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001235 4899 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001246 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001258 4899 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001269 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001279 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001290 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001300 4899 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 
20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001310 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001321 4899 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001333 4899 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001343 4899 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001357 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001367 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001377 4899 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 
20:55:31.001389 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002110 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.993788 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994221 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994261 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994250 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994328 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994633 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994663 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994952 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.994964 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995005 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995031 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995078 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:30.995348 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000036 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000129 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000380 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.000652 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001055 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001795 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.001957 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002052 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002362 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002394 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002563 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002671 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002735 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002755 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002786 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.002810 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003130 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003209 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003232 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003358 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003402 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003444 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003659 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.003750 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.004234 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.004372 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.004294 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.004818 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.004625 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.005676 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:55:31.50564623 +0000 UTC m=+20.887234307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006984 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.005867 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006701 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007114 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007141 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006183 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006408 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006362 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006499 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006515 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006598 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006707 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007255 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006757 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006767 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.006779 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007459 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007466 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.007719 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.008138 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.008165 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.008502 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.008659 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.010013 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.010574 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.010672 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.010883 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.011011 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.011330 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.011532 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.011777 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.012073 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.012099 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.012960 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.013176 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.013371 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.012687 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.013703 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.013990 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.015804 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.014141 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.014177 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.014543 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.015909 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.014669 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.015429 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.015684 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.016555 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.016694 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.017292 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.019387 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.019481 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.020510 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.020841 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.022017 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.022424 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.022526 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:31.522493413 +0000 UTC m=+20.904081650 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.022630 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.023207 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.023432 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:31.523408459 +0000 UTC m=+20.904996706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.028033 4899 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.030658 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.033655 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.035903 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.037370 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.037714 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.038334 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.042172 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.043048 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.048037 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.049386 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.049428 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.049445 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.049524 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:31.549497551 +0000 UTC m=+20.931085748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.049661 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.049899 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.037225 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.054955 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.055105 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.055729 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.055956 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.056007 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.056195 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.056341 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.056572 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.056905 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.056968 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.056993 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.057131 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:31.557050523 +0000 UTC m=+20.938638790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.057286 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.058774 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.059220 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.059683 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.060541 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.060772 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.059744 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.062217 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.063042 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.063683 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065111 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065496 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065507 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065501 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065606 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065630 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.065421 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.066191 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.066530 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.066725 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.067062 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.067464 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.067533 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.067630 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.067893 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.068296 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.068305 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.068642 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.068938 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.069302 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.069519 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.069628 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.070516 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.073903 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.073968 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074005 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074049 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074296 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074411 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074522 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074547 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.075683 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.074692 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.078033 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.079610 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.080998 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.081208 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54"} Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.081537 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.081883 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.081900 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.082017 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.084431 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.085830 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.086059 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.086123 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.086313 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.087400 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.088005 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.088702 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.088728 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.088999 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.089531 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.090845 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.093639 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.095996 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102038 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102545 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102654 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102775 4899 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102846 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102912 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103111 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103197 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103254 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103320 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103375 4899 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103439 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103502 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103560 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103625 4899 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103691 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103755 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") 
on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103813 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103880 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103964 4899 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104029 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104090 4899 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104165 4899 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104238 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104321 4899 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104378 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104436 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104497 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104553 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104612 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104668 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104719 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104773 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104831 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104886 4899 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.104966 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105024 4899 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105095 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105160 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath 
\"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105222 4899 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105281 4899 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105347 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105413 4899 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105470 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105534 4899 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105592 4899 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105666 4899 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105724 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105786 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105846 4899 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105904 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.105980 4899 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102767 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.102788 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106043 4899 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106122 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106136 4899 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.103454 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106151 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106250 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106277 4899 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106299 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106330 4899 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106351 4899 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106371 4899 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 
20:55:31.106400 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106420 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106440 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106468 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106486 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106504 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106523 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106552 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106572 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106592 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106610 4899 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106628 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106650 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106671 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106691 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106721 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106738 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106757 4899 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106776 4899 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106796 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106816 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106838 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106856 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106873 4899 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106891 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106909 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106949 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106967 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.106985 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node 
\"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107003 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107022 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107039 4899 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107057 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107075 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107094 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107113 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107164 4899 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107184 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107202 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107219 4899 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107236 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107255 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107273 4899 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107294 4899 reconciler_common.go:293] "Volume detached for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107314 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107332 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107351 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107370 4899 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107389 4899 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107407 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107424 4899 reconciler_common.go:293] "Volume detached for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107442 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107461 4899 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107479 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107497 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107517 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107537 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107556 4899 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107574 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107591 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107610 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107630 4899 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107648 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107665 4899 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107685 4899 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node 
\"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107703 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107739 4899 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107756 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107773 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107791 4899 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107808 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107825 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107843 
4899 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107861 4899 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107877 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107894 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107911 4899 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107960 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107980 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.107998 4899 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") 
on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108016 4899 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108033 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108050 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108067 4899 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108085 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108104 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108122 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108140 4899 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108158 4899 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108176 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108194 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108214 4899 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108232 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108251 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108271 4899 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108288 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108304 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108322 4899 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108339 4899 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108357 4899 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108375 4899 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108392 4899 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node 
\"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108408 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108426 4899 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108441 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108458 4899 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108476 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108493 4899 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.108510 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.116181 4899 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.119582 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.125201 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.149149 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 
20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.162490 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.172951 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.182540 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.187872 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.190996 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.196447 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.201688 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.208959 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.208990 4899 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 20:55:31 crc kubenswrapper[4899]: W0126 20:55:31.212024 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c71f9997bf95a45f8e51c968b2137e92e3c306d397641a6b764b749f18072109 WatchSource:0}: Error finding container c71f9997bf95a45f8e51c968b2137e92e3c306d397641a6b764b749f18072109: Status 404 returned error can't 
find the container with id c71f9997bf95a45f8e51c968b2137e92e3c306d397641a6b764b749f18072109 Jan 26 20:55:31 crc kubenswrapper[4899]: W0126 20:55:31.214397 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-894cd6b780c6054f5dc58a33456b5cc2ea3182050db7f88a1d1438f741332e6e WatchSource:0}: Error finding container 894cd6b780c6054f5dc58a33456b5cc2ea3182050db7f88a1d1438f741332e6e: Status 404 returned error can't find the container with id 894cd6b780c6054f5dc58a33456b5cc2ea3182050db7f88a1d1438f741332e6e Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.219822 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.232761 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.511344 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.511828 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 20:55:32.511797497 +0000 UTC m=+21.893385534 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.557617 4899 csr.go:261] certificate signing request csr-mn5sc is approved, waiting to be issued Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.613080 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.613125 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.613145 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.613165 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613248 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613310 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:32.613296056 +0000 UTC m=+21.994884093 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613334 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613393 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613430 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613446 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613460 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:32.61343157 +0000 UTC m=+21.995019607 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613522 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:32.613496562 +0000 UTC m=+21.995084779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613635 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613651 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613662 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 26 20:55:31 crc kubenswrapper[4899]: E0126 20:55:31.613701 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:32.613690567 +0000 UTC m=+21.995278604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.643543 4899 csr.go:257] certificate signing request csr-mn5sc is issued Jan 26 20:55:31 crc kubenswrapper[4899]: I0126 20:55:31.881331 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:27:34.511689001 +0000 UTC Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.062262 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wwvzr"] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.062906 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vlmbq"] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.063132 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.063227 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.065171 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.065348 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.065371 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.065398 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.065471 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.066216 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.067123 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.067891 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.085480 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"894cd6b780c6054f5dc58a33456b5cc2ea3182050db7f88a1d1438f741332e6e"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.087550 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.087595 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.087606 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c71f9997bf95a45f8e51c968b2137e92e3c306d397641a6b764b749f18072109"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.089243 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.089884 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1cc6235cd04b61a5bb5e5afb9defdc5baf6fa1f45abf531872f643cb986f3cb1"} Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.105050 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117603 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/af2334b6-f4a1-489a-acb2-0ddef342559d-rootfs\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117653 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2334b6-f4a1-489a-acb2-0ddef342559d-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117681 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5n4f\" (UniqueName: \"kubernetes.io/projected/af2334b6-f4a1-489a-acb2-0ddef342559d-kube-api-access-j5n4f\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117821 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2msg\" (UniqueName: \"kubernetes.io/projected/7eb474cc-d8b2-4d69-a738-90b30e635e94-kube-api-access-s2msg\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117858 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7eb474cc-d8b2-4d69-a738-90b30e635e94-hosts-file\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.117884 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af2334b6-f4a1-489a-acb2-0ddef342559d-proxy-tls\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.122376 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.138278 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.154032 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.172817 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.210074 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218620 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2msg\" (UniqueName: \"kubernetes.io/projected/7eb474cc-d8b2-4d69-a738-90b30e635e94-kube-api-access-s2msg\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218650 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7eb474cc-d8b2-4d69-a738-90b30e635e94-hosts-file\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218675 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af2334b6-f4a1-489a-acb2-0ddef342559d-proxy-tls\") pod \"machine-config-daemon-wwvzr\" (UID: 
\"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218741 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/af2334b6-f4a1-489a-acb2-0ddef342559d-rootfs\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218758 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5n4f\" (UniqueName: \"kubernetes.io/projected/af2334b6-f4a1-489a-acb2-0ddef342559d-kube-api-access-j5n4f\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218787 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2334b6-f4a1-489a-acb2-0ddef342559d-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.218845 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7eb474cc-d8b2-4d69-a738-90b30e635e94-hosts-file\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.219424 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2334b6-f4a1-489a-acb2-0ddef342559d-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.219637 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/af2334b6-f4a1-489a-acb2-0ddef342559d-rootfs\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.223280 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af2334b6-f4a1-489a-acb2-0ddef342559d-proxy-tls\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.241570 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5n4f\" (UniqueName: \"kubernetes.io/projected/af2334b6-f4a1-489a-acb2-0ddef342559d-kube-api-access-j5n4f\") pod \"machine-config-daemon-wwvzr\" (UID: \"af2334b6-f4a1-489a-acb2-0ddef342559d\") " pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.244436 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2msg\" (UniqueName: \"kubernetes.io/projected/7eb474cc-d8b2-4d69-a738-90b30e635e94-kube-api-access-s2msg\") pod \"node-resolver-vlmbq\" (UID: \"7eb474cc-d8b2-4d69-a738-90b30e635e94\") " pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.261843 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.296411 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.329904 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.358021 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.371996 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.377373 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.384388 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vlmbq" Jan 26 20:55:32 crc kubenswrapper[4899]: W0126 20:55:32.388467 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf2334b6_f4a1_489a_acb2_0ddef342559d.slice/crio-45c8ede8dc4a352b5f8c09f09b43f7b09615a6b6b70934ddcc2ebf5d86babc03 WatchSource:0}: Error finding container 45c8ede8dc4a352b5f8c09f09b43f7b09615a6b6b70934ddcc2ebf5d86babc03: Status 404 returned error can't find the container with id 45c8ede8dc4a352b5f8c09f09b43f7b09615a6b6b70934ddcc2ebf5d86babc03 Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.389311 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: W0126 20:55:32.400223 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb474cc_d8b2_4d69_a738_90b30e635e94.slice/crio-e11600865d57ac409c0b0b9993b59ebb49165861c0773aac941ff173e3b75ba6 WatchSource:0}: Error finding container e11600865d57ac409c0b0b9993b59ebb49165861c0773aac941ff173e3b75ba6: Status 404 returned error can't find the container with id e11600865d57ac409c0b0b9993b59ebb49165861c0773aac941ff173e3b75ba6 Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.430652 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.454153 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.463581 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-24sf9"] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.464026 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.470612 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.470908 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrvcx"] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.471672 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bpfpb"] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.471929 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.472294 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.476434 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.479763 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.479990 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480057 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480142 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480166 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480195 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480258 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480307 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480327 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480364 4899 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480379 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.480463 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.481604 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.496831 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.512484 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe631
0a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.521764 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.521990 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:55:34.521954591 +0000 UTC m=+23.903542638 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522185 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjqb\" (UniqueName: \"kubernetes.io/projected/595ae596-1477-4438-94f7-69400dc1ba20-kube-api-access-ntjqb\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522263 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522336 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522410 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-system-cni-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522531 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522612 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522685 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522755 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-socket-dir-parent\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522831 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.522985 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-bin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523108 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523202 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523291 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-os-release\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523400 4899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-os-release\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523483 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-multus-certs\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523575 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523664 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-cnibin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523741 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523834 4899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-multus-daemon-config\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.523922 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-conf-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524062 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524174 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt664\" (UniqueName: \"kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524271 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-cnibin\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 
20:55:32.524383 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524481 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524564 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7ss\" (UniqueName: \"kubernetes.io/projected/cb93604e-ad41-45c0-959d-1af0694fd11d-kube-api-access-lk7ss\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524658 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-multus\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524742 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-kubelet\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc 
kubenswrapper[4899]: I0126 20:55:32.524823 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-etc-kubernetes\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.524909 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525023 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525488 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525633 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-k8s-cni-cncf-io\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 
20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525673 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525701 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-netns\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525728 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525754 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525781 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-cni-binary-copy\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 
20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525819 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525850 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-hostroot\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525879 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-system-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525904 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525929 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" 
Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.525962 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.534142 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.552618 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.571879 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.594448 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.610819 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.625530 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626802 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626851 4899 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626873 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626890 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626907 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7ss\" (UniqueName: \"kubernetes.io/projected/cb93604e-ad41-45c0-959d-1af0694fd11d-kube-api-access-lk7ss\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626926 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-kubelet\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626945 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-etc-kubernetes\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626977 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.626993 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627011 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627031 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-k8s-cni-cncf-io\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627046 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-multus\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627063 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627077 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627092 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627109 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-cni-binary-copy\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627124 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-netns\") pod 
\"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627140 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627158 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627172 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-hostroot\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627189 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-system-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627205 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627221 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627236 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627253 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjqb\" (UniqueName: \"kubernetes.io/projected/595ae596-1477-4438-94f7-69400dc1ba20-kube-api-access-ntjqb\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627268 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627282 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627298 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-system-cni-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627317 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627336 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627351 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627360 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-netns\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 
20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627440 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627469 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-k8s-cni-cncf-io\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627511 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627529 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-socket-dir-parent\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627552 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627567 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-multus\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627599 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627602 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-etc-kubernetes\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627624 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627649 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-system-cni-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627662 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627686 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627835 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.627364 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-socket-dir-parent\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.627311 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.628397 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628400 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-cni-binary-copy\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.628416 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.627401 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628418 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628533 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628575 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628602 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628624 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628643 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-os-release\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628646 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628665 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-os-release\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 
crc kubenswrapper[4899]: I0126 20:55:32.628686 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-bin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628712 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628706 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628767 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-multus-certs\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628784 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628803 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-cnibin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628821 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-multus-daemon-config\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628839 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628854 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt664\" (UniqueName: \"kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628873 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-cnibin\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629004 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-conf-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629008 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629006 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629068 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-cnibin\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629076 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629099 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629109 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629115 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:34.629101799 +0000 UTC m=+24.010689836 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629138 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:34.629129579 +0000 UTC m=+24.010717616 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629153 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb93604e-ad41-45c0-959d-1af0694fd11d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629178 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb93604e-ad41-45c0-959d-1af0694fd11d-os-release\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629186 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-kubelet\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629193 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-os-release\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629211 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.628686 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629220 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629241 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-hostroot\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629249 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-var-lib-cni-bin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629261 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629273 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:34.629262743 +0000 UTC m=+24.010851020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629292 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629313 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.629323 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:34.629299344 +0000 UTC m=+24.010887381 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629316 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629339 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-system-cni-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629340 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629366 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-multus-conf-dir\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629367 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-host-run-multus-certs\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629377 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/595ae596-1477-4438-94f7-69400dc1ba20-cnibin\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.629718 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.630020 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/595ae596-1477-4438-94f7-69400dc1ba20-multus-daemon-config\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.632130 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.644762 4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 20:50:31 +0000 UTC, rotation deadline is 2026-12-05 11:24:01.512143745 +0000 UTC Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.644838 
4899 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7502h28m28.867308865s for next certificate rotation Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.645625 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.647039 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7ss\" (UniqueName: \"kubernetes.io/projected/cb93604e-ad41-45c0-959d-1af0694fd11d-kube-api-access-lk7ss\") pod \"multus-additional-cni-plugins-bpfpb\" (UID: \"cb93604e-ad41-45c0-959d-1af0694fd11d\") " pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.651322 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt664\" (UniqueName: \"kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664\") pod \"ovnkube-node-mrvcx\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.655764 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjqb\" (UniqueName: \"kubernetes.io/projected/595ae596-1477-4438-94f7-69400dc1ba20-kube-api-access-ntjqb\") pod \"multus-24sf9\" (UID: \"595ae596-1477-4438-94f7-69400dc1ba20\") " pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.663098 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe631
0a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.675444 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.690526 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.711930 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.726885 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.744869 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.777132 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-24sf9" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.789200 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.798422 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:32 crc kubenswrapper[4899]: W0126 20:55:32.808130 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb93604e_ad41_45c0_959d_1af0694fd11d.slice/crio-e6650e0dc0ca72d8ea4b6a6a5de2b80f6e6fa3d9dd4b70d29363fd03b137ec98 WatchSource:0}: Error finding container e6650e0dc0ca72d8ea4b6a6a5de2b80f6e6fa3d9dd4b70d29363fd03b137ec98: Status 404 returned error can't find the container with id e6650e0dc0ca72d8ea4b6a6a5de2b80f6e6fa3d9dd4b70d29363fd03b137ec98 Jan 26 20:55:32 crc kubenswrapper[4899]: W0126 20:55:32.816023 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30d7d720_d73a_488d_b6ec_755f5da1888c.slice/crio-8b4bf2edb0344a2c53f01be5769d4f9fcba711d745b363acf0c1e4748e28534b WatchSource:0}: Error finding container 8b4bf2edb0344a2c53f01be5769d4f9fcba711d745b363acf0c1e4748e28534b: Status 404 returned error can't find the container with id 8b4bf2edb0344a2c53f01be5769d4f9fcba711d745b363acf0c1e4748e28534b Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.882489 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:33:35.521731256 +0000 UTC Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.930797 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.930862 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.930954 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.931288 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.931151 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:32 crc kubenswrapper[4899]: E0126 20:55:32.931387 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.935840 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.936621 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.937826 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.938508 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.939512 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.940072 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.941088 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.942918 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.943687 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.944681 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.945283 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.947948 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.948560 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.949323 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.950461 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.951098 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.952149 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.952596 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.953184 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.954242 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.954740 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.955809 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.956273 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.957355 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.957842 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.958521 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.959766 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.960294 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.961259 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.961785 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.962762 4899 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.962869 4899 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.964611 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.965624 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.966121 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.968345 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.971126 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.972868 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.974204 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.974907 4899 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.975917 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.976797 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.977870 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.978855 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.979357 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.980300 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.980823 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.982090 4899 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.982593 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.983112 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.983966 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.984537 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.985522 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.986165 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.987173 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:32 crc kubenswrapper[4899]: I0126 20:55:32.991914 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.001849 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.002635 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.014110 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.037130 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.056212 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.076449 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.092659 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.093027 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"8b4bf2edb0344a2c53f01be5769d4f9fcba711d745b363acf0c1e4748e28534b"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.094386 4899 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerStarted","Data":"9f9618c36010e1fc8db43ef9ad357ee88c775dd37d850ea2f89bb9987d2c4712"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.096852 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.096877 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.096890 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"45c8ede8dc4a352b5f8c09f09b43f7b09615a6b6b70934ddcc2ebf5d86babc03"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.098416 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerStarted","Data":"e6650e0dc0ca72d8ea4b6a6a5de2b80f6e6fa3d9dd4b70d29363fd03b137ec98"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.099638 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vlmbq" event={"ID":"7eb474cc-d8b2-4d69-a738-90b30e635e94","Type":"ContainerStarted","Data":"62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.099672 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-vlmbq" event={"ID":"7eb474cc-d8b2-4d69-a738-90b30e635e94","Type":"ContainerStarted","Data":"e11600865d57ac409c0b0b9993b59ebb49165861c0773aac941ff173e3b75ba6"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.107267 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.121854 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.136555 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.150427 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.165036 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.189890 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.213564 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.236945 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.269466 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.286126 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.304474 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.327183 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.352506 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.371050 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.384161 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.396343 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.410125 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.423979 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.440501 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.731124 4899 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.732892 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.732928 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc 
kubenswrapper[4899]: I0126 20:55:33.732964 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.733071 4899 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.741039 4899 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.741350 4899 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.742438 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.742459 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.742467 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.742481 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.742490 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: E0126 20:55:33.768100 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.772198 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.772241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.772264 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.772280 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.772292 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: E0126 20:55:33.785661 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.789221 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.789257 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.789266 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.789282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.789292 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: E0126 20:55:33.803666 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.808574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.808610 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.808623 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.808675 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.808692 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.824323 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.824350 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.824359 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.824372 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.824384 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: E0126 20:55:33.847155 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:33 crc kubenswrapper[4899]: E0126 20:55:33.847314 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.849234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.849275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.849288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.849307 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.849321 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.883538 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:38:49.82860111 +0000 UTC Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.951551 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.951601 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.951611 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.951626 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:33 crc kubenswrapper[4899]: I0126 20:55:33.951636 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:33Z","lastTransitionTime":"2026-01-26T20:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.054179 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.054215 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.054225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.054240 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.054251 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.104393 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10" exitCode=0 Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.104479 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.106126 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.108180 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" exitCode=0 Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.108274 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.109962 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerStarted","Data":"04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.122928 4899 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.146967 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.159483 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.159535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.159546 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.159563 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.159578 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.168832 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.185744 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.202747 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.215287 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.238196 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.253888 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.262086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.262131 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.262146 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.262168 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.262181 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.271449 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.285921 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.299827 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.320538 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.333626 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.345769 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.363531 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.365545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.365570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.365579 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.365597 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.365607 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.382915 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.402010 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.414006 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.427094 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.439700 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.458217 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.469028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.469272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.469399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.469520 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.469640 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.471779 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.486558 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.501391 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.518890 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.537196 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.547969 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.548196 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:55:38.548162163 +0000 UTC m=+27.929750210 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.572093 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.572134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.572144 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.572163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.572176 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.649392 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.649442 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.649474 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.649508 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649638 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649716 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:38.649695853 +0000 UTC m=+28.031283890 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649649 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649766 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649783 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649811 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-26 20:55:38.649803476 +0000 UTC m=+28.031391513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649646 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649841 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649851 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649878 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:38.649872168 +0000 UTC m=+28.031460205 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649642 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.649917 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:38.649903279 +0000 UTC m=+28.031491306 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.724321 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.724353 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.724362 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.724377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.724388 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.798144 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t8lnv"] Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.798600 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.800125 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.801338 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.801488 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.801513 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.815346 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.827346 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.827386 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.827398 4899 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.827416 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.827427 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.829151 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.840780 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.851367 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-host\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.851396 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nqg\" (UniqueName: \"kubernetes.io/projected/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-kube-api-access-56nqg\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.851426 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-serviceca\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.865151 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.878050 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.884225 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:10:41.667051738 +0000 UTC Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.900544 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.914848 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.926401 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.929539 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.929659 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.929779 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.929809 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.930243 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:34 crc kubenswrapper[4899]: E0126 20:55:34.930640 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.933385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.933426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.933445 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.933464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.933477 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:34Z","lastTransitionTime":"2026-01-26T20:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.938982 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.951966 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-host\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.952001 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56nqg\" (UniqueName: \"kubernetes.io/projected/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-kube-api-access-56nqg\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.952028 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-serviceca\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") 
" pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.952292 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-host\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.952932 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-serviceca\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.954368 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.969411 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56nqg\" (UniqueName: \"kubernetes.io/projected/6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7-kube-api-access-56nqg\") pod \"node-ca-t8lnv\" (UID: \"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\") " pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.970997 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.983324 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:34 crc kubenswrapper[4899]: I0126 20:55:34.998029 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.021883 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.035627 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.035662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.035672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.035689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.035702 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.112946 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t8lnv" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.119476 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005" exitCode=0 Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.119527 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005"} Jan 26 20:55:35 crc kubenswrapper[4899]: W0126 20:55:35.124227 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3eee89_3332_4ac0_8c40_c7b77bfd9ee7.slice/crio-ba0e80cad714a5ad29f69f5afb692a829a5aff2870fc29c8a4426f41846885a1 WatchSource:0}: Error finding container ba0e80cad714a5ad29f69f5afb692a829a5aff2870fc29c8a4426f41846885a1: Status 404 returned error can't find the container with id ba0e80cad714a5ad29f69f5afb692a829a5aff2870fc29c8a4426f41846885a1 Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.125203 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" 
event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.125237 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.125246 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.133316 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.138230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.138252 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.138261 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.138273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.138283 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.147031 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a077
92a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.164852 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.178666 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.205728 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.223591 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.241570 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.241912 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.241994 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.242011 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.242036 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.242059 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.257821 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.272952 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.289247 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.300246 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.312544 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.326530 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.342725 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.346096 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.346137 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.346151 4899 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.346169 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.346182 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.448625 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.448661 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.448673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.448690 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.448702 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.551441 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.551472 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.551480 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.551502 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.551514 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.656449 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.656477 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.656487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.656617 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.656627 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.760462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.760516 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.760529 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.760548 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.760559 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.863728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.863779 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.863798 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.863818 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.863831 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.885138 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:19:41.661758698 +0000 UTC Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.965679 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.965712 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.965720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.965734 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:35 crc kubenswrapper[4899]: I0126 20:55:35.965743 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:35Z","lastTransitionTime":"2026-01-26T20:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.067563 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.067629 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.067642 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.067660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.067675 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.130496 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b" exitCode=0 Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.130564 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.134519 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.134568 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.134584 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.135834 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t8lnv" event={"ID":"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7","Type":"ContainerStarted","Data":"d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.135864 4899 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/node-ca-t8lnv" event={"ID":"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7","Type":"ContainerStarted","Data":"ba0e80cad714a5ad29f69f5afb692a829a5aff2870fc29c8a4426f41846885a1"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.151023 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.163399 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.170093 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc 
kubenswrapper[4899]: I0126 20:55:36.170143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.170156 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.170176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.170188 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.181811 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.196019 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.208788 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.219681 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.232475 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe631
0a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.247722 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.260051 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.276125 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.277305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.277333 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.277343 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.277363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.277376 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.287337 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.303533 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.315053 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.325382 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.338023 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.348029 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.359366 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.370977 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.384188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc 
kubenswrapper[4899]: I0126 20:55:36.384231 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.384243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.384261 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.384275 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.387095 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.396584 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.415010 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.427143 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.437456 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.447422 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.456706 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.473735 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487060 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 
20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487642 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.487813 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.499035 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:36Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.590443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.590482 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.590491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.590507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.590518 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.693364 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.693402 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.693413 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.693430 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.693444 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.795317 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.795362 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.795374 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.795391 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.795404 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.885716 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:36:47.193469299 +0000 UTC Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.897400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.897434 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.897443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.897458 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.897467 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.930131 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.930145 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:36 crc kubenswrapper[4899]: E0126 20:55:36.930529 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.930180 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:36 crc kubenswrapper[4899]: E0126 20:55:36.930625 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:36 crc kubenswrapper[4899]: E0126 20:55:36.930993 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.999638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.999673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.999682 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.999700 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:36 crc kubenswrapper[4899]: I0126 20:55:36.999712 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:36Z","lastTransitionTime":"2026-01-26T20:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.102531 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.102573 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.102582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.102596 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.102607 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.142216 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878" exitCode=0 Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.142256 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.160508 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.180512 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.202347 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.205677 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.205708 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.205717 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.205731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.205743 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.219310 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.237153 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.253349 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.271044 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.288718 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.305670 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T2
0:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.309548 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.309735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.309872 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.309897 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.309911 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.323252 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.351926 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.367466 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 
20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.384944 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.401369 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.413648 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 
20:55:37.413692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.413706 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.413726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.413782 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.516525 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.516576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.516588 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.516605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.516614 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.619095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.619143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.619157 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.619178 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.619192 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.721628 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.721673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.721684 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.721702 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.721719 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.825819 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.826475 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.826669 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.826845 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.827089 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.886019 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:10:32.1872443 +0000 UTC Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.930041 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.930133 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.930158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.930188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:37 crc kubenswrapper[4899]: I0126 20:55:37.930212 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:37Z","lastTransitionTime":"2026-01-26T20:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.033159 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.033416 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.033494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.033606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.033690 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.136683 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.136726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.136740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.136758 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.136770 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.147885 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f" exitCode=0 Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.147950 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.155304 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.171478 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.194621 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.217893 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.238315 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.251073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.251136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.251168 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.251197 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.251218 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.257147 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.274569 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.288294 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.304170 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.317918 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.328570 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.353170 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.355289 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.355322 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.355330 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.355366 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.355376 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.370622 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.382248 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.394262 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.459653 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.459732 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.459769 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.459788 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.459809 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.562597 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.562921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.563056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.563162 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.563270 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.592652 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.592965 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:55:46.592917974 +0000 UTC m=+35.974506021 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.666400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.666443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.666456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.666474 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.666488 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.694438 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.694506 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.694552 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.694597 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694662 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694686 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694733 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:46.694710761 +0000 UTC m=+36.076298808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694786 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694797 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694850 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694874 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694816 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694972 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.694789 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:46.694764033 +0000 UTC m=+36.076352080 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.695085 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:46.69502721 +0000 UTC m=+36.076615287 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.695113 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:55:46.695099822 +0000 UTC m=+36.076687899 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.769280 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.769339 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.769360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.769385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.769404 4899 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.873140 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.873189 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.873207 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.873231 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.873249 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.887174 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:38:22.912101632 +0000 UTC Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.930361 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.930388 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.930444 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.931388 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.931511 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:38 crc kubenswrapper[4899]: E0126 20:55:38.931741 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.980081 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.980694 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.981005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.981306 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:38 crc kubenswrapper[4899]: I0126 20:55:38.982063 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:38Z","lastTransitionTime":"2026-01-26T20:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.086158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.086206 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.086219 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.086236 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.086247 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.163241 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerStarted","Data":"0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.189130 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.189193 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.189206 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.189224 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.189237 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.292593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.292877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.292889 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.292908 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.292919 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.395630 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.395752 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.395771 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.395795 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.395815 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.498444 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.498509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.498532 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.498560 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.498585 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.601217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.601251 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.601290 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.601305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.601315 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.704367 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.704434 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.704462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.704491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.704513 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.807091 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.807166 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.807186 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.807211 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.807232 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.888575 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:13:36.703360694 +0000 UTC Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.910436 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.910493 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.910510 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.910534 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:39 crc kubenswrapper[4899]: I0126 20:55:39.910553 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:39Z","lastTransitionTime":"2026-01-26T20:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.013182 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.013288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.013307 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.013334 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.013354 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.115176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.115227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.115246 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.115273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.115291 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.170395 4899 generic.go:334] "Generic (PLEG): container finished" podID="cb93604e-ad41-45c0-959d-1af0694fd11d" containerID="0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164" exitCode=0 Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.170438 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerDied","Data":"0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.185414 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.202760 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.225906 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.230600 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.230640 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.230652 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.230670 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.230686 4899 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.241115 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.261298 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.275241 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.292148 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.307652 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.320878 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.332849 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.332887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.332896 4899 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.332945 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.332959 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.333336 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.342600 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.364206 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.377992 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.389216 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.436052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.436112 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.436125 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc 
kubenswrapper[4899]: I0126 20:55:40.436144 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.436159 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.539160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.539213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.539229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.539252 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.539270 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.641456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.641489 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.641498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.641513 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.641523 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.774711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.774770 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.774796 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.774828 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.774851 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.779297 4899 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.878715 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.878769 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.878786 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.878809 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.878873 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.889422 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:58:11.25618978 +0000 UTC Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.930364 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.930402 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.930366 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:40 crc kubenswrapper[4899]: E0126 20:55:40.930493 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:40 crc kubenswrapper[4899]: E0126 20:55:40.930970 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:40 crc kubenswrapper[4899]: E0126 20:55:40.931227 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.958008 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.970986 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.981533 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.981574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.981600 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.981618 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.981630 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:40Z","lastTransitionTime":"2026-01-26T20:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:40 crc kubenswrapper[4899]: I0126 20:55:40.989726 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d1393
11865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.009199 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 
20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.026027 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.039761 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.053669 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.067329 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.084445 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.084481 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.084492 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.084510 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.084523 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.090665 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.106030 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.120627 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.140795 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.164532 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.179130 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.179436 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.179492 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.181238 4899 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.186107 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" event={"ID":"cb93604e-ad41-45c0-959d-1af0694fd11d","Type":"ContainerStarted","Data":"492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.186707 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.186731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.186738 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.186749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 
20:55:41.186759 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.195396 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.202741 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.206687 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.212867 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.229233 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12e
a98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.244917 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.262523 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.279922 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.289262 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.289357 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.289376 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.289430 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.289448 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.293688 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.308448 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.329359 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.344263 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.356995 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.368246 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.384358 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-n
ode-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.391587 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.391615 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.391622 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.391636 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.391649 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.402732 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.415106 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.426497 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.438135 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.456568 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.469307 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.483082 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.494411 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.494443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.494453 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.494467 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.494477 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.502830 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.521894 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.536299 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.551236 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.568672 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.595287 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.597251 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.597292 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.597305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.597323 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.597336 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.613815 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.630364 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.700450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.700484 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.700493 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.700507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.700518 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.803651 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.803699 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.803711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.803729 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.803741 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.890917 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:00:22.785191003 +0000 UTC Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.910293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.910347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.910362 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.910386 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:41 crc kubenswrapper[4899]: I0126 20:55:41.910402 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:41Z","lastTransitionTime":"2026-01-26T20:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.013696 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.013737 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.013749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.013770 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.013789 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.116552 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.116596 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.116610 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.116629 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.116648 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.188703 4899 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.219377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.219419 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.219430 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.219445 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.219456 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.322863 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.322954 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.322977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.323021 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.323044 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.426890 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.426975 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.427002 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.427031 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.427054 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.529701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.529776 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.529785 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.529799 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.529809 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.632319 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.632363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.632374 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.632393 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.632405 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.734491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.734527 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.734535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.734556 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.734568 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.837098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.837133 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.837142 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.837158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.837168 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.891882 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:47:54.641056396 +0000 UTC Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.930341 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.930362 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.930415 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:42 crc kubenswrapper[4899]: E0126 20:55:42.930468 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:42 crc kubenswrapper[4899]: E0126 20:55:42.930560 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:42 crc kubenswrapper[4899]: E0126 20:55:42.930636 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.939047 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.939313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.939487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.939663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:42 crc kubenswrapper[4899]: I0126 20:55:42.939829 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:42Z","lastTransitionTime":"2026-01-26T20:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.042992 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.043059 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.043082 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.043110 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.043132 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.147647 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.147689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.147698 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.147713 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.147722 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.191584 4899 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.250187 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.250247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.250265 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.250291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.250312 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.307102 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.352705 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.352772 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.352788 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.352815 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.352853 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.455214 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.455268 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.455284 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.455311 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.455330 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.558052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.558107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.558124 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.558148 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.558165 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.660680 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.660716 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.660725 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.660740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.660750 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.763176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.763234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.763256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.763299 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.763321 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.866388 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.866439 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.866450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.866475 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.866490 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.892880 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:19:35.212032194 +0000 UTC Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.968602 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.968638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.968646 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.968660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:43 crc kubenswrapper[4899]: I0126 20:55:43.968670 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:43Z","lastTransitionTime":"2026-01-26T20:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.071088 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.071136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.071156 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.071179 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.071196 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.174341 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.174377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.174386 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.174400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.174408 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.202245 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/0.log" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.206103 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe" exitCode=1 Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.206163 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.207715 4899 scope.go:117] "RemoveContainer" containerID="9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.215464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.215556 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.215576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.215599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.215657 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.231746 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.236110 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.243061 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.243103 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.243130 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.243147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.243157 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.252818 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.262989 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.266782 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.266818 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.266829 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.266844 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.266855 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.272285 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.283227 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.285878 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.286793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.286855 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.286867 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.286886 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.286935 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.298419 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.300714 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.303720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.303756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.303769 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.303786 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.303805 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.315750 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab62810028312969452
03a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.315833 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.316301 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.319047 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.319079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.319090 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.319107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.319119 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.328142 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.339083 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.352130 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.366595 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.378357 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.390667 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.400794 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.415777 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.421267 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.421305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.421315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.421332 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.421343 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.523805 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.523850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.523865 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.523887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.523906 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.626340 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.626387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.626400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.626417 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.626429 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.728310 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.728347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.728360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.728380 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.728393 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.831052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.831090 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.831101 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.831116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.831129 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.894003 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:19:16.657623772 +0000 UTC Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.929678 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.929776 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.929681 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.929798 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.930032 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:44 crc kubenswrapper[4899]: E0126 20:55:44.930123 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.933271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.933298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.933310 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.933325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.933336 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:44Z","lastTransitionTime":"2026-01-26T20:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.973609 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4"] Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.974311 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.976308 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.976448 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.985082 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:44 crc kubenswrapper[4899]: I0126 20:55:44.996725 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.009076 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.021848 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.033413 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.035060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc 
kubenswrapper[4899]: I0126 20:55:45.035083 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.035092 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.035108 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.035119 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.046952 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.060680 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.067252 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/2285c985-da54-4035-b72d-06f9c067f463-kube-api-access-2thn4\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.067305 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2285c985-da54-4035-b72d-06f9c067f463-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 
20:55:45.067332 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.067356 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.071450 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.082761 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.094055 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.106588 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.115972 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.135173 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab62810028312969452
03a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.136907 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.136969 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.136984 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.137002 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.137017 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.151074 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.163960 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.168290 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2285c985-da54-4035-b72d-06f9c067f463-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.168319 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.168338 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.168382 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/2285c985-da54-4035-b72d-06f9c067f463-kube-api-access-2thn4\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.169223 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.169286 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2285c985-da54-4035-b72d-06f9c067f463-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.173954 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2285c985-da54-4035-b72d-06f9c067f463-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.185583 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/2285c985-da54-4035-b72d-06f9c067f463-kube-api-access-2thn4\") pod \"ovnkube-control-plane-749d76644c-vl6k4\" (UID: \"2285c985-da54-4035-b72d-06f9c067f463\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.210570 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/0.log" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.213368 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.214315 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.229966 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.239407 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.239426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.239454 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.239468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.239478 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.240886 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.256393 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.269458 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.287065 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.287717 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.304500 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.321277 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.332376 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.341395 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.341425 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.341435 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.341449 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.341458 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.349868 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.362514 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.374340 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.387413 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.399632 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20
:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.414851 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.431306 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.443878 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.443974 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.444001 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc 
kubenswrapper[4899]: I0126 20:55:45.444032 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.444050 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.546117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.546160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.546172 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.546187 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.546201 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.648244 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.648574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.648714 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.648910 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.649092 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.752131 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.752204 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.752231 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.752248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.752259 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.855494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.855588 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.855606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.855632 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.855649 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.894703 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:28:38.491903831 +0000 UTC Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.959259 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.959325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.959343 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.959367 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:45 crc kubenswrapper[4899]: I0126 20:55:45.959384 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:45Z","lastTransitionTime":"2026-01-26T20:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.061612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.061695 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.061720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.061750 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.061772 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.166152 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.166422 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.166439 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.166459 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.166480 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.269355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.269407 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.269421 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.269438 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.269450 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.371461 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.371506 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.371516 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.371532 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.371542 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.444628 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5s8xd"] Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.445366 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.445605 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.464597 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.473789 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.473834 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.473850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.473871 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.473890 4899 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.484418 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.500697 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.521583 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.538711 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.551153 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.563693 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.574264 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.576097 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.576143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.576154 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.576173 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.576186 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.582717 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwzr\" (UniqueName: \"kubernetes.io/projected/88f49476-befa-4689-91cb-c0a8cc1def3d-kube-api-access-gbwzr\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.582764 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.591709 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.607321 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.619822 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.634068 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.648828 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.666679 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.678662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc 
kubenswrapper[4899]: I0126 20:55:46.678701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.678710 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.678724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.678734 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.684178 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.684375 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.684354778 +0000 UTC m=+52.065942815 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.684424 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwzr\" (UniqueName: \"kubernetes.io/projected/88f49476-befa-4689-91cb-c0a8cc1def3d-kube-api-access-gbwzr\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.684456 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.684616 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.684666 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:55:47.184655627 +0000 UTC m=+36.566243664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.684847 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.696198 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:46Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.702946 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwzr\" (UniqueName: \"kubernetes.io/projected/88f49476-befa-4689-91cb-c0a8cc1def3d-kube-api-access-gbwzr\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.781754 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.781797 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.781809 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.781828 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.781841 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.785205 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.785254 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.785291 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.785336 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785396 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785416 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785465 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.785445856 +0000 UTC m=+52.167033913 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785468 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785487 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.785478807 +0000 UTC m=+52.167066864 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785492 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785506 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785522 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785562 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.785544909 +0000 UTC m=+52.167132946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785569 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785587 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.785663 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.785644891 +0000 UTC m=+52.167232948 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.884366 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.884422 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.884438 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.884457 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.884470 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.894870 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:28:33.576296737 +0000 UTC Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.930860 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.930920 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.931012 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.931097 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.931187 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:46 crc kubenswrapper[4899]: E0126 20:55:46.931254 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.987039 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.987063 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.987071 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.987086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:46 crc kubenswrapper[4899]: I0126 20:55:46.987094 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:46Z","lastTransitionTime":"2026-01-26T20:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.089314 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.089365 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.089382 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.089405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.089423 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.189130 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:47 crc kubenswrapper[4899]: E0126 20:55:47.189286 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:47 crc kubenswrapper[4899]: E0126 20:55:47.189361 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:55:48.189348433 +0000 UTC m=+37.570936470 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.191420 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.191452 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.191462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.191477 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.191487 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.221631 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" event={"ID":"2285c985-da54-4035-b72d-06f9c067f463","Type":"ContainerStarted","Data":"6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.221697 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" event={"ID":"2285c985-da54-4035-b72d-06f9c067f463","Type":"ContainerStarted","Data":"e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.221720 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" event={"ID":"2285c985-da54-4035-b72d-06f9c067f463","Type":"ContainerStarted","Data":"8b63db164bf57a416827e3bdced98aa247ed4949d7f4cfeabdbe14fed7b8e9d9"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.227744 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/1.log" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.228818 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/0.log" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.232986 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c" exitCode=1 Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.233020 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" 
event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.233067 4899 scope.go:117] "RemoveContainer" containerID="9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.234239 4899 scope.go:117] "RemoveContainer" containerID="5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c" Jan 26 20:55:47 crc kubenswrapper[4899]: E0126 20:55:47.234611 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.245059 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.260820 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.272177 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.285797 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.293772 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.293951 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.294051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.294133 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.294236 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.302658 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.316726 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.330046 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.341324 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.360236 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.377307 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.391284 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.396383 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.396422 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.396434 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc 
kubenswrapper[4899]: I0126 20:55:47.396450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.396461 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.441355 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26
T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.458224 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.475367 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.493259 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.498584 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.498621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.498637 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.498658 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.498675 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.508795 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc 
kubenswrapper[4899]: I0126 20:55:47.525656 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.541227 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.554764 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.573427 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.591377 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.601570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc 
kubenswrapper[4899]: I0126 20:55:47.601605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.601616 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.601632 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.601647 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.611682 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.625201 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc 
kubenswrapper[4899]: I0126 20:55:47.640799 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.654574 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.665278 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.679662 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] 
\\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.693298 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.705509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.705541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.705551 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.705569 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.705584 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.708193 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.720127 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.730232 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.747165 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbbb5e2e0c10eb35cb096e8e7409f162c15623fcae99a0127002b54a3fd4bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:43Z\\\",\\\"message\\\":\\\"ing reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 20:55:43.861495 6237 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.861814 6237 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 20:55:43.862424 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 20:55:43.862495 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 20:55:43.862520 6237 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 20:55:43.862530 6237 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 20:55:43.862557 6237 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 20:55:43.862568 6237 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 20:55:43.862570 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 20:55:43.862603 6237 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 20:55:43.863106 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 20:55:43.863161 6237 factory.go:656] Stopping watch factory\\\\nI0126 20:55:43.863182 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: 
[]services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:47Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.807982 4899 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.808072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.808100 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.808138 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.808176 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.895797 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:04:02.479566134 +0000 UTC Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.911176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.911239 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.911259 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.911285 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.911303 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:47Z","lastTransitionTime":"2026-01-26T20:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:47 crc kubenswrapper[4899]: I0126 20:55:47.930397 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:47 crc kubenswrapper[4899]: E0126 20:55:47.930503 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.014059 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.014128 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.014142 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.014160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.014174 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.117200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.117247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.117256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.117272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.117281 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.200853 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.201039 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.201125 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:55:50.201101311 +0000 UTC m=+39.582689368 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.220036 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.220071 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.220081 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.220095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.220106 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.237583 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/1.log" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.240428 4899 scope.go:117] "RemoveContainer" containerID="5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c" Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.240599 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.257193 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.273412 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.286086 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.297959 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T2
0:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.308793 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.322609 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.322647 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.322660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.322677 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.322690 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.330679 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.343727 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 
20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.357817 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.367352 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.377437 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.384987 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.388126 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.397743 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.409715 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.422292 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.425142 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc 
kubenswrapper[4899]: I0126 20:55:48.425206 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.425218 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.425249 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.425261 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.440072 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.452846 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc 
kubenswrapper[4899]: I0126 20:55:48.466776 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61
c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.480105 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.493716 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.502993 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.514633 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.528214 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.528244 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.528254 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.528268 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.528278 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.533362 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.544087 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.553030 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.562987 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.572842 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.583340 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.597590 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.607444 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc 
kubenswrapper[4899]: I0126 20:55:48.624418 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.630387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.630433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.630444 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.630458 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.630468 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.637108 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.648699 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:48Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.732605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.732644 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.732655 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.732672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.732682 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.834808 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.834845 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.834857 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.834873 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.834885 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.896477 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:32:59.637471845 +0000 UTC Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.930202 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.930217 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.930217 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.930294 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.930514 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:48 crc kubenswrapper[4899]: E0126 20:55:48.930706 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.936778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.936847 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.936870 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.936897 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:48 crc kubenswrapper[4899]: I0126 20:55:48.936922 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:48Z","lastTransitionTime":"2026-01-26T20:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.040414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.040497 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.040556 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.040589 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.040614 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.143466 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.143527 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.143545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.143573 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.143591 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.246485 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.246591 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.246665 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.246692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.246712 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.349615 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.349672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.349692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.349711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.349724 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.453415 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.453499 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.453516 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.453538 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.453555 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.557348 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.557450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.557468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.557494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.557510 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.660841 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.661000 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.661026 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.661055 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.661077 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.763875 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.763921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.763960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.763977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.763990 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.866658 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.866718 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.866738 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.866760 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.866775 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.897467 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:32:22.804911332 +0000 UTC Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.930130 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:49 crc kubenswrapper[4899]: E0126 20:55:49.930407 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.969685 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.969733 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.969750 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.969771 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:49 crc kubenswrapper[4899]: I0126 20:55:49.969786 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:49Z","lastTransitionTime":"2026-01-26T20:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.072994 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.073044 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.073060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.073084 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.073098 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.175274 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.175305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.175315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.175329 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.175342 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.224119 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:50 crc kubenswrapper[4899]: E0126 20:55:50.224300 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:50 crc kubenswrapper[4899]: E0126 20:55:50.224371 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:55:54.224353182 +0000 UTC m=+43.605941219 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.277624 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.277683 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.277692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.277706 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.277717 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.380134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.380174 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.380265 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.380283 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.380295 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.483218 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.483258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.483269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.483288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.483300 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.585469 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.585507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.585519 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.585535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.585549 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.688074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.688155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.688178 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.688210 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.688239 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.790593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.790662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.790673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.790720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.790733 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.893517 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.893560 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.893570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.893584 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.893595 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.897667 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:37:12.062071566 +0000 UTC Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.930180 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.930236 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:50 crc kubenswrapper[4899]: E0126 20:55:50.930370 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.930412 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:50 crc kubenswrapper[4899]: E0126 20:55:50.930546 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:50 crc kubenswrapper[4899]: E0126 20:55:50.930622 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.951854 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.972351 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.997763 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:50 crc 
kubenswrapper[4899]: I0126 20:55:50.997668 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a
217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.997885 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.997983 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.998022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:50 crc kubenswrapper[4899]: I0126 20:55:50.998046 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:50Z","lastTransitionTime":"2026-01-26T20:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.014217 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc 
kubenswrapper[4899]: I0126 20:55:51.036139 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.057463 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.075635 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.094201 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.100137 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.100193 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.100210 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.100237 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.100255 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.114848 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.128282 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.140807 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.159513 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.190386 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.204394 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.204463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.204481 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.204510 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.204529 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.208910 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.228743 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.243140 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.308416 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.308450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.308459 4899 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.308473 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.308485 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.417851 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.417914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.417948 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.417967 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.417981 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.520528 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.520606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.520627 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.521115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.521182 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.625338 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.625393 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.625511 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.625547 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.625566 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.734397 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.734488 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.734500 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.734514 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.734523 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.837884 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.837966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.837977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.837991 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.838000 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.898202 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:20:40.278619659 +0000 UTC Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.930677 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:51 crc kubenswrapper[4899]: E0126 20:55:51.930888 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.942433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.942521 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.942548 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.942584 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:51 crc kubenswrapper[4899]: I0126 20:55:51.942613 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:51Z","lastTransitionTime":"2026-01-26T20:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.045909 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.046006 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.046024 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.046052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.046072 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.149594 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.149696 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.149722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.149753 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.149777 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.251966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.252020 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.252033 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.252051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.252064 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.354246 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.354314 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.354338 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.354365 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.354387 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.457632 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.457702 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.457727 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.457757 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.457778 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.560747 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.560807 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.560821 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.560843 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.560857 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.665152 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.665233 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.665251 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.665282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.665309 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.769116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.769198 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.769220 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.769257 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.769282 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.872738 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.872806 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.872827 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.872858 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.872880 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.898425 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:53:55.814292256 +0000 UTC Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.929920 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.930047 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.929975 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:52 crc kubenswrapper[4899]: E0126 20:55:52.930206 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:52 crc kubenswrapper[4899]: E0126 20:55:52.930426 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:52 crc kubenswrapper[4899]: E0126 20:55:52.930496 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.976621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.976689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.976705 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.976726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:52 crc kubenswrapper[4899]: I0126 20:55:52.976739 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:52Z","lastTransitionTime":"2026-01-26T20:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.080065 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.080124 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.080143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.080169 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.080187 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.183443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.183505 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.183523 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.183549 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.183568 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.286306 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.286358 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.286373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.286392 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.286406 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.388974 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.389010 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.389023 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.389043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.389054 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.492323 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.492377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.492392 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.492410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.492426 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.595793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.595846 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.595860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.595885 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.595901 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.699051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.699404 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.699508 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.699644 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.699772 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.803036 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.803143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.803158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.803176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.803377 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.899192 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:29:47.994829276 +0000 UTC Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.905996 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.906155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.906256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.906364 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.906480 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:53Z","lastTransitionTime":"2026-01-26T20:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:53 crc kubenswrapper[4899]: I0126 20:55:53.930597 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:53 crc kubenswrapper[4899]: E0126 20:55:53.931124 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.009114 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.009163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.009176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.009194 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.009207 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.112022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.112105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.112118 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.112131 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.112159 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.215051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.215091 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.215100 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.215116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.215126 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.271129 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.271532 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.271661 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:56:02.271643134 +0000 UTC m=+51.653231171 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.318205 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.318260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.318272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.318296 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.318310 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.380791 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.380866 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.380901 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.380974 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.381031 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.404129 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.412436 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.412765 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.412996 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.413257 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.413634 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.431357 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.438248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.438436 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.438592 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.438750 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.439002 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.461477 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.467553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.467757 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.467884 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.468035 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.468180 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.488687 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.494432 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.494476 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.494486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.494503 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.494514 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.513479 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.513605 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.515790 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.515825 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.515835 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.515850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.515859 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.618840 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.618962 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.618992 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.619030 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.619056 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.722909 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.722990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.723004 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.723030 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.723049 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.826771 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.827349 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.827496 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.827644 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.827777 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.899667 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:07:57.861123863 +0000 UTC Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.929977 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.930052 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.930170 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.930326 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.930483 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:54 crc kubenswrapper[4899]: E0126 20:55:54.930577 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.933886 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.933924 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.933951 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.933969 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:54 crc kubenswrapper[4899]: I0126 20:55:54.933984 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:54Z","lastTransitionTime":"2026-01-26T20:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.036626 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.036728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.036749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.036783 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.036817 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.139462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.139498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.139509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.139524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.139533 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.244200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.244261 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.244273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.244291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.244307 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.347221 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.347302 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.347319 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.347337 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.347350 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.449865 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.449920 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.449950 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.449963 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.449973 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.553230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.553318 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.553345 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.553377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.553400 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.656305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.656348 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.656367 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.656382 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.656393 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.758846 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.758897 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.758913 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.758960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.758974 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.862308 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.862362 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.862377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.862399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.862415 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.900089 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:14:07.083312869 +0000 UTC Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.929630 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:55 crc kubenswrapper[4899]: E0126 20:55:55.929874 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.965869 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.965914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.965959 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.965978 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:55 crc kubenswrapper[4899]: I0126 20:55:55.965993 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:55Z","lastTransitionTime":"2026-01-26T20:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.068832 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.068885 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.068907 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.068973 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.069018 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.172066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.172118 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.172136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.172159 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.172177 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.274501 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.274553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.274570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.274595 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.274612 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.378225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.378592 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.378816 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.379072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.379267 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.482055 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.482095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.482105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.482117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.482126 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.584575 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.584613 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.584621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.584636 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.584647 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.688126 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.688161 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.688170 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.688186 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.688196 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.790969 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.791006 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.791015 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.791030 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.791038 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.893728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.893780 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.893796 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.893812 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.893821 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.900400 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:08:11.961615287 +0000 UTC Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.930120 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.930238 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:56 crc kubenswrapper[4899]: E0126 20:55:56.930400 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.930421 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:56 crc kubenswrapper[4899]: E0126 20:55:56.930486 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:56 crc kubenswrapper[4899]: E0126 20:55:56.930571 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.996135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.996461 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.996583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.996673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:56 crc kubenswrapper[4899]: I0126 20:55:56.996757 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:56Z","lastTransitionTime":"2026-01-26T20:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.099286 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.099553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.099617 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.099689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.099747 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.202300 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.202574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.202713 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.202779 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.202844 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.306391 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.306796 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.307049 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.307215 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.307334 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.410402 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.410463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.410486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.410517 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.410543 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.513435 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.513895 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.514195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.514409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.514628 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.618268 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.618328 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.618346 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.618376 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.618394 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.721150 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.721496 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.721689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.721890 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.722141 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.824356 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.824423 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.824491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.824522 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.824542 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.901595 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:00:33.164574638 +0000 UTC Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.927394 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.927469 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.927490 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.927520 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.927543 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:57Z","lastTransitionTime":"2026-01-26T20:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:57 crc kubenswrapper[4899]: I0126 20:55:57.929592 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:57 crc kubenswrapper[4899]: E0126 20:55:57.929742 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.029899 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.029979 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.029992 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.030009 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.030020 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.131984 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.132041 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.132057 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.132080 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.132096 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.235113 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.235165 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.235177 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.235195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.235208 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.338162 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.338219 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.338231 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.338250 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.338262 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.440793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.440840 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.440856 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.440877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.440893 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.543028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.543072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.543083 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.543104 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.543117 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.646336 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.646402 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.646431 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.646462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.646499 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.750391 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.750433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.750442 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.750457 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.750466 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.853802 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.853847 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.853862 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.853878 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.853890 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.901968 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:34:18.660108744 +0000 UTC Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.930676 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.930676 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.930825 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:55:58 crc kubenswrapper[4899]: E0126 20:55:58.930859 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:55:58 crc kubenswrapper[4899]: E0126 20:55:58.931020 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:55:58 crc kubenswrapper[4899]: E0126 20:55:58.931116 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.956401 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.956434 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.956442 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.956455 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:58 crc kubenswrapper[4899]: I0126 20:55:58.956464 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:58Z","lastTransitionTime":"2026-01-26T20:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.059396 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.059429 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.059437 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.059452 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.059463 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.162429 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.162485 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.162507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.162531 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.162550 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.264881 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.264961 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.264978 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.265001 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.265018 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.367103 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.367147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.367159 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.367176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.367198 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.469400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.469623 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.469709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.469793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.469877 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.572280 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.572324 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.572335 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.572355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.572366 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.675572 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.675635 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.675658 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.675684 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.675706 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.778966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.779326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.779464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.779586 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.779707 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.882079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.882161 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.882178 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.882204 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.882221 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.902516 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:39:38.13571692 +0000 UTC Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.930097 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:55:59 crc kubenswrapper[4899]: E0126 20:55:59.930339 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.984428 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.984472 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.984483 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.984498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:55:59 crc kubenswrapper[4899]: I0126 20:55:59.984510 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:55:59Z","lastTransitionTime":"2026-01-26T20:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.027577 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.058253 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.066521 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.079476 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.086591 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.086649 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.086662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.086681 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.086694 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.095591 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.111763 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.125879 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.137120 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.150415 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.163505 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.174872 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.184939 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.188436 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.188454 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.188465 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.188478 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.188487 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.193567 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.211018 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.227081 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.238589 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.251901 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.267508 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.290358 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.290775 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.291036 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.291282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.291590 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.394155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.394403 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.394475 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.394540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.394599 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.496796 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.497038 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.497178 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.497245 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.497305 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.601503 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.601593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.601650 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.601675 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.601692 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.704813 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.705171 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.705263 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.705348 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.705511 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.809019 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.809087 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.809110 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.809134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.809155 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.903543 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:01:33.732280528 +0000 UTC Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.911469 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.911496 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.911507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.911523 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.911535 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:00Z","lastTransitionTime":"2026-01-26T20:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.931184 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:00 crc kubenswrapper[4899]: E0126 20:56:00.931351 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.931418 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:00 crc kubenswrapper[4899]: E0126 20:56:00.931567 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.931579 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:00 crc kubenswrapper[4899]: E0126 20:56:00.931693 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.952225 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.965443 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.983124 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:00 crc kubenswrapper[4899]: I0126 20:56:00.997992 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:00Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.012251 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.013578 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.013614 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.013629 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.013649 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.013664 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.028587 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.042123 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.064789 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.080895 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.098604 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.113834 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.116008 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.116050 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.116066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc 
kubenswrapper[4899]: I0126 20:56:01.116087 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.116102 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.126804 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26
T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.140007 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.162228 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.176280 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc 
kubenswrapper[4899]: I0126 20:56:01.192436 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.209420 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.219985 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc 
kubenswrapper[4899]: I0126 20:56:01.220043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.220069 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.220098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.220116 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.322860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.322919 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.322960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.322976 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.322985 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.426060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.426097 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.426106 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.426121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.426129 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.528981 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.529037 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.529053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.529074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.529089 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.631791 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.631847 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.631864 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.631886 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.631901 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.734972 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.735027 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.735044 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.735472 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.735518 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.838225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.838288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.838305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.838331 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.838352 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.903911 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:32:36.500434686 +0000 UTC Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.930334 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:01 crc kubenswrapper[4899]: E0126 20:56:01.930534 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.940861 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.940909 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.940977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.941003 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:01 crc kubenswrapper[4899]: I0126 20:56:01.941038 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:01Z","lastTransitionTime":"2026-01-26T20:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.043714 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.043854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.043877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.043908 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.043963 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.146690 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.146807 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.146834 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.146867 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.146891 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.249921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.250011 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.250029 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.250048 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.250061 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.352227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.352275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.352289 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.352310 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.352325 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.360225 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.360341 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.360401 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:56:18.360385134 +0000 UTC m=+67.741973171 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.455491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.455544 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.455561 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.455586 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.455604 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.558252 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.558328 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.558350 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.558396 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.558422 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.661021 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.661057 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.661065 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.661078 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.661086 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.763241 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.763502 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:56:34.763399016 +0000 UTC m=+84.144987093 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.764581 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.764650 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.764673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.764705 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.764729 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.864251 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.864333 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.864376 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.864420 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864468 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864494 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864523 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864539 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864554 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864582 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864594 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864595 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:34.864571755 +0000 UTC m=+84.246159842 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864527 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864630 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:34.864612827 +0000 UTC m=+84.246200904 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864661 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:34.864646328 +0000 UTC m=+84.246234455 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.864695 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:56:34.864677438 +0000 UTC m=+84.246265565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.866599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.866622 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.866650 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.866663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.866672 4899 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.904796 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:19:12.606120439 +0000 UTC Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.930651 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.930723 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.930667 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.930971 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.931543 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:02 crc kubenswrapper[4899]: E0126 20:56:02.931735 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.931962 4899 scope.go:117] "RemoveContainer" containerID="5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.969904 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.970236 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.970261 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.970292 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:02 crc kubenswrapper[4899]: I0126 20:56:02.970315 
4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:02Z","lastTransitionTime":"2026-01-26T20:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.073619 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.073671 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.073694 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.073722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.073746 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.176359 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.176385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.176393 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.176406 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.176414 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.281608 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.281667 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.281683 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.281703 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.281720 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.292444 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/1.log" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.296028 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.296418 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.322300 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.347430 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.366267 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.406107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.406144 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.406155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.406190 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.406203 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.419005 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.432230 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.442397 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.459992 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 
20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.472982 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.483800 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.499098 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.507678 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.507712 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.507723 4899 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.507738 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.507750 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.512027 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.531143 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.547349 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.558819 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc 
kubenswrapper[4899]: I0126 20:56:03.571695 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.583619 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.593732 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.610309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.610401 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.610421 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.610858 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.611163 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.713114 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.713141 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.713163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.713176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.713185 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.815682 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.815722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.815731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.815748 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.815759 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.905661 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:13:18.191885057 +0000 UTC Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.917604 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.917666 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.917685 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.917706 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.917767 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:03Z","lastTransitionTime":"2026-01-26T20:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:03 crc kubenswrapper[4899]: I0126 20:56:03.929792 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:03 crc kubenswrapper[4899]: E0126 20:56:03.929886 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.020631 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.020662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.020670 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.020683 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.020692 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.122965 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.122989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.122997 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.123020 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.123029 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.225213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.225250 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.225259 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.225271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.225279 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.301137 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/2.log" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.302022 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/1.log" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.308652 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" exitCode=1 Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.308696 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.308726 4899 scope.go:117] "RemoveContainer" containerID="5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.309707 4899 scope.go:117] "RemoveContainer" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.309886 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.319839 4899 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.327077 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.327105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.327115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.327128 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.327138 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.329025 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.352518 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0d367346d233980b7ac89a017bc421147e390640155e6959adeecca5c5892c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"pin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:55:44.956819 6379 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nF0126 20:55:44.956617 6379 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:55:44Z is after 2025-08-24T17:21:41Z]\\\\nI0126 20:55:44.956824\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed 
attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 
crc kubenswrapper[4899]: I0126 20:56:04.362127 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.373514 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.384429 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.394307 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.406698 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.416257 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.427857 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.429222 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.429255 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.429263 4899 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.429276 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.429285 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.439595 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.452142 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.471420 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.482649 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc 
kubenswrapper[4899]: I0126 20:56:04.497976 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.511991 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.524396 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.531028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.531078 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.531092 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.531112 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.531125 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.634051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.634098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.634114 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.634147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.634162 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.644717 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.644773 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.644790 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.644813 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.644829 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.666173 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.670315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.670377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.670400 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.670428 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.670449 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.691557 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.695735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.695768 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.695779 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.695793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.695803 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.714592 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... identical status patch payload as above ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.718024 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.718049 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.718057 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.718072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.718098 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.736847 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... identical status patch payload as above ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.741207 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.741233 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.741241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.741265 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.741276 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.758047 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:04Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.758343 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.760647 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.760724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.760746 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.760778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.760807 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.863333 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.863363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.863373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.863386 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.863395 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.905854 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:13:25.757933765 +0000 UTC Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.929712 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.929789 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.929902 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.929906 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.930031 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:04 crc kubenswrapper[4899]: E0126 20:56:04.930105 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.966464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.966521 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.966538 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.966561 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:04 crc kubenswrapper[4899]: I0126 20:56:04.966578 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:04Z","lastTransitionTime":"2026-01-26T20:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.070012 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.070065 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.070086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.070109 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.070125 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.172921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.173066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.173090 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.173117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.173159 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.276458 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.276534 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.276557 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.276582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.276598 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.313685 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/2.log" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.318119 4899 scope.go:117] "RemoveContainer" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" Jan 26 20:56:05 crc kubenswrapper[4899]: E0126 20:56:05.318369 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.333431 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.344817 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.357874 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.369581 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc 
kubenswrapper[4899]: I0126 20:56:05.378911 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.378988 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.379006 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.379028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.379046 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.385178 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d1393
11865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.402279 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.414620 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.429541 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T2
0:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.441327 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.465413 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.479309 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.481151 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.481199 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.481217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.481243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.481262 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.496100 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.513392 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.526232 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.544797 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.560456 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.575056 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:05Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.583921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.583983 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.583996 4899 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.584015 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.584029 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.686117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.686157 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.686168 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.686185 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.686196 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.788405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.788433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.788441 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.788456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.788466 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.890495 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.890567 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.890587 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.890612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.890637 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.906292 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:07:10.083564127 +0000 UTC Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.942070 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:05 crc kubenswrapper[4899]: E0126 20:56:05.942222 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.994188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.994263 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.994285 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.994315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:05 crc kubenswrapper[4899]: I0126 20:56:05.994337 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:05Z","lastTransitionTime":"2026-01-26T20:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.096951 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.096992 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.097005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.097022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.097033 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.200538 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.200595 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.200701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.200726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.200744 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.303556 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.303600 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.303612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.303628 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.303650 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.405618 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.405661 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.405671 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.405690 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.405699 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.508696 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.508764 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.508780 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.508800 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.508815 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.611565 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.611598 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.611607 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.611620 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.611629 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.714468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.714512 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.714524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.714541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.714553 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.816907 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.817206 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.817229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.817249 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.817264 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.906379 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:49:50.692133677 +0000 UTC Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.918917 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.918965 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.918977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.919052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.919067 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:06Z","lastTransitionTime":"2026-01-26T20:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.929727 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.929724 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:06 crc kubenswrapper[4899]: E0126 20:56:06.929858 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:06 crc kubenswrapper[4899]: I0126 20:56:06.929750 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:06 crc kubenswrapper[4899]: E0126 20:56:06.929919 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:06 crc kubenswrapper[4899]: E0126 20:56:06.930096 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.020900 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.021011 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.021039 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.021072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.021092 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.124192 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.124243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.124260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.124303 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.124321 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.227249 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.227330 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.227357 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.227391 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.227416 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.330204 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.330281 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.330306 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.330332 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.330350 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.432990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.433053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.433071 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.433265 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.433282 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.536250 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.536321 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.536338 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.536363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.536381 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.639464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.639613 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.639633 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.639663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.639684 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.743244 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.743306 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.743322 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.743349 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.743367 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.845843 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.845871 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.845879 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.845892 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.845903 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.907264 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:34:52.11873047 +0000 UTC Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.930637 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:07 crc kubenswrapper[4899]: E0126 20:56:07.930871 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.948480 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.948539 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.948558 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.948582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:07 crc kubenswrapper[4899]: I0126 20:56:07.948602 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:07Z","lastTransitionTime":"2026-01-26T20:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.051887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.051976 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.052013 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.052055 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.052091 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.156111 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.156221 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.156249 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.156279 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.156302 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.263984 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.264507 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.264707 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.264911 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.265195 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.368239 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.368302 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.368320 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.368347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.368364 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.471098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.471166 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.471250 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.471282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.471305 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.574524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.574572 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.574586 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.574603 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.574618 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.677072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.677105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.677116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.677131 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.677141 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.779487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.779527 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.779537 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.779552 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.779561 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.881757 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.881801 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.881812 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.881829 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.881839 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.907578 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:04:20.671322057 +0000 UTC Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.929950 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.930012 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.930048 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:08 crc kubenswrapper[4899]: E0126 20:56:08.930095 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:08 crc kubenswrapper[4899]: E0126 20:56:08.930166 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:08 crc kubenswrapper[4899]: E0126 20:56:08.930275 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.984147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.984185 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.984197 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.984213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:08 crc kubenswrapper[4899]: I0126 20:56:08.984236 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:08Z","lastTransitionTime":"2026-01-26T20:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.086914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.086971 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.086984 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.086999 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.087010 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.188966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.189270 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.189357 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.189442 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.189541 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.292354 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.292417 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.292435 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.292460 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.292478 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.394540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.394578 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.394587 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.394601 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.394611 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.496946 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.497195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.497254 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.497326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.497388 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.599440 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.599498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.599514 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.599537 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.599557 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.703628 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.703699 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.703720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.703748 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.703769 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.807248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.807300 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.807309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.807325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.807354 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.908278 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:31:53.967292762 +0000 UTC Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.910008 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.910032 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.910040 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.910053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.910062 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:09Z","lastTransitionTime":"2026-01-26T20:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:09 crc kubenswrapper[4899]: I0126 20:56:09.929583 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:09 crc kubenswrapper[4899]: E0126 20:56:09.929672 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.012135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.012293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.012325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.012356 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.012377 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.115463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.115503 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.115515 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.115535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.115547 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.217830 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.217870 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.217881 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.217958 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.217975 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.321866 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.321909 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.321918 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.322020 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.322032 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.424879 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.424954 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.424966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.424987 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.424999 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.527830 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.528121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.528200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.528313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.528384 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.630271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.630590 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.630698 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.630862 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.630975 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.733052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.733196 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.733212 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.733232 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.733246 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.835710 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.835739 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.835765 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.835778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.835786 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.908581 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:31:21.199424212 +0000 UTC Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.929603 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.929644 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:10 crc kubenswrapper[4899]: E0126 20:56:10.929750 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.929973 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:10 crc kubenswrapper[4899]: E0126 20:56:10.930026 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:10 crc kubenswrapper[4899]: E0126 20:56:10.930183 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.937376 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.937416 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.937428 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.937443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.937456 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:10Z","lastTransitionTime":"2026-01-26T20:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.947396 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:10Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.960839 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:10Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.977176 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:10Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:10 crc kubenswrapper[4899]: I0126 20:56:10.995039 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:10Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.013522 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23d
c0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:
55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.029565 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc 
kubenswrapper[4899]: I0126 20:56:11.039224 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.039260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.039271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.039286 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.039298 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.046988 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.065096 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.076150 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.089141 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.107655 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.121057 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.134235 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.141150 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.141217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.141242 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.141271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.141309 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.147506 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.159834 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.180470 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.195668 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.244145 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.244176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.244184 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.244197 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.244205 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.346454 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.346486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.346494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.346506 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.346516 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.449519 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.449560 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.449588 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.449606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.449616 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.552620 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.552689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.552726 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.552756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.552777 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.656671 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.657441 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.657533 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.657625 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.657696 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.760638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.760894 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.761022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.761121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.761227 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.863638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.863856 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.863997 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.864119 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.864220 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.909098 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:05:41.771878173 +0000 UTC Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.930210 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:11 crc kubenswrapper[4899]: E0126 20:56:11.930412 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.967107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.967354 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.967437 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.967533 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:11 crc kubenswrapper[4899]: I0126 20:56:11.967596 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:11Z","lastTransitionTime":"2026-01-26T20:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.069891 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.069924 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.069946 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.069960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.069969 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.172742 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.172778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.172786 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.172801 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.172812 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.274995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.275041 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.275060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.275079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.275102 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.377552 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.377590 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.377601 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.377620 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.377634 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.480749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.480804 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.480820 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.480850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.480867 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.583662 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.583731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.583761 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.583803 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.583826 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.686480 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.686524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.686540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.686596 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.686610 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.789577 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.789638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.789655 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.789678 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.789695 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.892707 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.892815 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.892835 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.892888 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.892907 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.909288 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:58:59.259304159 +0000 UTC Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.930053 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.930114 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:12 crc kubenswrapper[4899]: E0126 20:56:12.930271 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.930309 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:12 crc kubenswrapper[4899]: E0126 20:56:12.930558 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:12 crc kubenswrapper[4899]: E0126 20:56:12.930737 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.996508 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.996571 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.996593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.996620 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:12 crc kubenswrapper[4899]: I0126 20:56:12.996639 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:12Z","lastTransitionTime":"2026-01-26T20:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.099181 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.099282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.099298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.099313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.099325 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.202579 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.202654 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.202673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.203415 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.203483 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.306677 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.306724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.306741 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.306767 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.306785 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.410557 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.410597 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.410605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.410622 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.410632 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.518274 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.518345 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.518371 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.518618 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.518644 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.621554 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.621608 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.621625 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.621646 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.621660 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.723995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.724241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.724324 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.724415 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.724488 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.827019 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.827076 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.827094 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.827122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.827139 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.910137 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:25:26.073454854 +0000 UTC Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.929649 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:13 crc kubenswrapper[4899]: E0126 20:56:13.929872 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.930395 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.930586 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.930734 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.930880 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:13 crc kubenswrapper[4899]: I0126 20:56:13.931080 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:13Z","lastTransitionTime":"2026-01-26T20:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.035026 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.035091 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.035107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.035126 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.035140 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.138389 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.138429 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.138455 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.138471 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.138482 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.241291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.241343 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.241355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.241379 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.241391 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.344239 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.344532 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.344550 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.344573 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.344611 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.447952 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.448008 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.448021 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.448045 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.448062 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.551443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.551498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.551510 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.551531 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.551547 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.654227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.654263 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.654274 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.654288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.654299 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.756462 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.756511 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.756523 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.756542 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.756556 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.774168 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.774200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.774208 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.774224 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.774239 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.786644 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.790962 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.791011 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.791025 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.791043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.791056 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.807905 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.818348 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.818386 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.818395 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.818410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.818419 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.833421 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.837888 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.837951 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.837969 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.837989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.838004 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.850963 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.856114 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.856269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.856344 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.856413 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.856474 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.871495 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.871728 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.873180 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.873225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.873238 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.873256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.873266 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.910683 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:32:11.210516989 +0000 UTC Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.929882 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.929903 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.930097 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.930263 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.930288 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:14 crc kubenswrapper[4899]: E0126 20:56:14.930388 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.975275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.975316 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.975325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.975340 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:14 crc kubenswrapper[4899]: I0126 20:56:14.975349 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:14Z","lastTransitionTime":"2026-01-26T20:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.077839 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.077868 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.077876 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.077890 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.077898 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.179710 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.179995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.180104 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.180199 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.180278 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.284568 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.284799 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.284857 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.284938 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.285004 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.387156 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.387403 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.387466 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.387528 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.387594 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.489279 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.489566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.489663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.489728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.489798 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.592407 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.592463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.592480 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.592505 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.592523 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.695258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.695288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.695300 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.695314 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.695360 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.797273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.797334 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.797344 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.797363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.797372 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.899122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.899147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.899155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.899167 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.899192 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:15Z","lastTransitionTime":"2026-01-26T20:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.910970 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:06:50.344271655 +0000 UTC Jan 26 20:56:15 crc kubenswrapper[4899]: I0126 20:56:15.930245 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:15 crc kubenswrapper[4899]: E0126 20:56:15.930332 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.002756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.002778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.002788 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.002800 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.002809 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.106678 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.106717 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.106730 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.106746 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.106756 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.209033 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.209079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.209093 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.209143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.209155 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.311461 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.311741 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.311807 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.311887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.311980 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.415109 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.415173 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.415192 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.415216 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.415234 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.522254 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.522541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.522618 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.522701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.522764 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.625147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.625498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.625601 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.625691 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.625775 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.728616 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.728715 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.728744 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.728781 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.728806 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.831552 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.831621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.831638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.831666 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.831686 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.911221 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:32:54.576080471 +0000 UTC Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.935204 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.935385 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:16 crc kubenswrapper[4899]: E0126 20:56:16.936383 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:16 crc kubenswrapper[4899]: E0126 20:56:16.935572 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.938502 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.938536 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.938553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.938573 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.938587 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:16Z","lastTransitionTime":"2026-01-26T20:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:16 crc kubenswrapper[4899]: I0126 20:56:16.942536 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:16 crc kubenswrapper[4899]: E0126 20:56:16.942722 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.042312 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.042630 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.042728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.042812 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.042909 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.146032 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.146066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.146079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.146096 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.146110 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.248674 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.248708 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.248720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.248735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.248746 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.351612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.351654 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.351666 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.351682 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.351693 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.454793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.454834 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.454843 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.454859 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.454868 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.557570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.557798 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.557903 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.557990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.558060 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.660042 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.660072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.660081 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.660094 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.660105 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.763216 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.763466 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.763757 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.764053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.764340 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.867995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.868267 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.868411 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.868520 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.868619 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.911739 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:45:01.615532504 +0000 UTC Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.930125 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:17 crc kubenswrapper[4899]: E0126 20:56:17.930265 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.970761 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.970813 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.970824 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.970842 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:17 crc kubenswrapper[4899]: I0126 20:56:17.970856 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:17Z","lastTransitionTime":"2026-01-26T20:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.073243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.073272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.073282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.073295 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.073304 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.176083 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.176119 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.176130 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.176145 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.176156 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.278498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.278735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.278814 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.278953 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.279041 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.377682 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.377790 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.377830 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:56:50.377816801 +0000 UTC m=+99.759404838 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.383533 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.383718 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.383806 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.383879 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.383966 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.491089 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.491298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.491355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.491412 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.491476 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.594157 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.594201 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.594213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.594229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.594240 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.698133 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.698172 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.698180 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.698195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.698204 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.801373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.801414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.801426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.801443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.801454 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.903629 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.903663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.903690 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.903709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.903726 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:18Z","lastTransitionTime":"2026-01-26T20:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.912816 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:16:18.945841958 +0000 UTC Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.930061 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.930152 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.930310 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.930356 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.930458 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.930498 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:18 crc kubenswrapper[4899]: I0126 20:56:18.931108 4899 scope.go:117] "RemoveContainer" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" Jan 26 20:56:18 crc kubenswrapper[4899]: E0126 20:56:18.931338 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.006731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.006798 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.006810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.006846 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.006859 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.109230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.109269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.109281 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.109297 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.109307 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.211663 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.211708 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.211720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.211739 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.211751 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.314352 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.314385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.314393 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.314406 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.314414 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.362594 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/0.log" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.362651 4899 generic.go:334] "Generic (PLEG): container finished" podID="595ae596-1477-4438-94f7-69400dc1ba20" containerID="04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5" exitCode=1 Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.362677 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerDied","Data":"04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.363138 4899 scope.go:117] "RemoveContainer" containerID="04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.377631 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.397605 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.409683 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.417227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.417268 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.417281 4899 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.417299 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.417312 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.426119 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.441845 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.466902 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.480194 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc 
kubenswrapper[4899]: I0126 20:56:19.497636 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.515375 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.519138 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.519377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.519480 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.519583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.519697 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.528067 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.541023 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.553415 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.565605 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.575782 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.588250 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.598521 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.619019 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.622306 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.622367 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.622385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.622409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.622427 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.724105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.724136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.724147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.724162 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.724171 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.826361 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.826393 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.826403 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.826417 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.826426 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.913210 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:54:21.570834135 +0000 UTC Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.928733 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.928884 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.929024 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.929127 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.929212 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:19Z","lastTransitionTime":"2026-01-26T20:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:19 crc kubenswrapper[4899]: I0126 20:56:19.929756 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:19 crc kubenswrapper[4899]: E0126 20:56:19.930014 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.031854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.031903 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.031914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.031949 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.031962 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.134053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.134086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.134096 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.134112 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.134126 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.236527 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.236561 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.236569 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.236583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.236594 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.339187 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.339224 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.339233 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.339247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.339255 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.367260 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/0.log" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.367317 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerStarted","Data":"a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.378474 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.388560 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.403249 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.427485 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.441860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.441895 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.441903 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.441918 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.441954 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.442411 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc 
kubenswrapper[4899]: I0126 20:56:20.459104 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.475581 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.495282 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.509564 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.521401 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.538661 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.544404 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.544443 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.544456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.544472 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.544483 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.550576 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.561024 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.569624 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.589357 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.598771 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.611969 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.646430 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.646456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.646464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.646476 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.646484 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.748823 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.748860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.748869 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.748883 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.748892 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.851193 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.851506 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.851605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.851698 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.851785 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.913381 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:45:32.313431367 +0000 UTC Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.930471 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:20 crc kubenswrapper[4899]: E0126 20:56:20.930629 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.930859 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:20 crc kubenswrapper[4899]: E0126 20:56:20.931001 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.931183 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:20 crc kubenswrapper[4899]: E0126 20:56:20.931293 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.944868 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.954781 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.954998 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.955092 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.955181 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.955173 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc 
kubenswrapper[4899]: I0126 20:56:20.955269 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:20Z","lastTransitionTime":"2026-01-26T20:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.966506 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.978295 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:20 crc kubenswrapper[4899]: I0126 20:56:20.988193 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.045844 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.056706 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.056735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.056744 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.056760 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.056769 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.057351 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d1393
11865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.070631 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.080205 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.089322 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.097495 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.115520 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.126618 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.139415 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.148888 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.159809 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.159864 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.159873 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc 
kubenswrapper[4899]: I0126 20:56:21.159887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.159898 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.159951 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26
T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.170013 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.265659 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.265711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.265722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.265736 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.265745 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.368284 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.368315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.368326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.368340 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.368350 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.470022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.470063 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.470072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.470086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.470097 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.572158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.572392 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.572452 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.572512 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.572608 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.675442 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.675481 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.675491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.675506 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.675516 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.778188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.778503 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.778582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.778649 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.778707 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.880989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.881226 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.881321 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.881390 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.881450 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.913749 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:36:51.53579217 +0000 UTC Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.930130 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:21 crc kubenswrapper[4899]: E0126 20:56:21.930279 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.984049 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.984092 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.984108 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.984128 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:21 crc kubenswrapper[4899]: I0126 20:56:21.984143 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:21Z","lastTransitionTime":"2026-01-26T20:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.086709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.086748 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.086760 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.086774 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.086784 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.189015 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.189053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.189068 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.189086 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.189098 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.291792 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.291831 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.291839 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.291854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.291866 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.394073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.394120 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.394134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.394153 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.394164 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.496206 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.496248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.496259 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.496274 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.496284 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.599238 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.599559 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.599703 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.599832 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.600000 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.702906 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.703236 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.703298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.703382 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.703487 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.806823 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.806874 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.806884 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.806900 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.806910 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.910188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.910519 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.910640 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.910743 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.910831 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:22Z","lastTransitionTime":"2026-01-26T20:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.914675 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:40:56.178106837 +0000 UTC Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.930539 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.930586 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:22 crc kubenswrapper[4899]: I0126 20:56:22.930649 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:22 crc kubenswrapper[4899]: E0126 20:56:22.930680 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:22 crc kubenswrapper[4899]: E0126 20:56:22.931036 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:22 crc kubenswrapper[4899]: E0126 20:56:22.931109 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.013073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.013115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.013123 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.013136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.013144 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.116103 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.116149 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.116160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.116176 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.116189 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.219072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.219326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.219404 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.219486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.219568 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.323182 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.323234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.323252 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.323282 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.323300 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.426733 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.426810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.426830 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.426860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.426883 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.529645 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.529721 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.529812 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.529895 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.529965 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.633382 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.633648 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.633722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.633801 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.633868 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.736404 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.736459 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.736471 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.736493 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.736507 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.840291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.840542 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.840641 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.840755 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.840836 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.915458 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:21:47.204790314 +0000 UTC Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.929773 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:23 crc kubenswrapper[4899]: E0126 20:56:23.930027 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.944030 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.944083 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.944113 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.944134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:23 crc kubenswrapper[4899]: I0126 20:56:23.944148 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:23Z","lastTransitionTime":"2026-01-26T20:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.046821 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.047076 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.047116 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.047145 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.047163 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.151175 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.151220 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.151229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.151245 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.151255 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.255033 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.255079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.255089 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.255105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.255115 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.358123 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.358385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.358471 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.358540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.358600 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.462329 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.462382 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.462394 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.462414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.462426 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.565707 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.565778 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.565799 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.565826 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.565845 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.668640 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.668688 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.668700 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.668715 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.668727 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.771283 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.771329 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.771340 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.771358 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.771371 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.873545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.873578 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.873589 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.873604 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.873612 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.902810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.902849 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.902858 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.902871 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.902880 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.916033 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:00:55.137745148 +0000 UTC Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.920788 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",
\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.925533 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.925630 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.925684 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.925725 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.925752 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.931293 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.931606 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.932137 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.932304 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.932739 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.932882 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.945158 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.950899 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.951012 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.951052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.951192 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.951224 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.973688 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.979324 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.979371 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.979387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.979406 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:24 crc kubenswrapper[4899]: I0126 20:56:24.979422 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:24Z","lastTransitionTime":"2026-01-26T20:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:24 crc kubenswrapper[4899]: E0126 20:56:24.999353 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.004028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.004073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.004090 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.004115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.004134 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: E0126 20:56:25.022006 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:25Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:25 crc kubenswrapper[4899]: E0126 20:56:25.022139 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.024505 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.024558 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.024583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.024610 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.024632 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.127896 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.127975 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.127993 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.128014 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.128031 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.230768 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.230802 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.230810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.230824 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.230832 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.333487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.333535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.333553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.333572 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.333586 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.436211 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.436255 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.436272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.436293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.436307 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.539692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.539762 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.539783 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.540015 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.540048 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.643155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.643212 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.643229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.643252 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.643269 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.745828 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.746163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.746401 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.746611 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.746794 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.850780 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.850836 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.850854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.850878 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.850896 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.917016 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:50:14.576363615 +0000 UTC Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.930646 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:25 crc kubenswrapper[4899]: E0126 20:56:25.930850 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.953587 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.953652 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.953709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.953735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:25 crc kubenswrapper[4899]: I0126 20:56:25.953753 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:25Z","lastTransitionTime":"2026-01-26T20:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.056260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.056313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.056328 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.056347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.056360 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.159459 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.159494 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.159502 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.159514 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.159523 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.262530 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.262576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.262590 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.262611 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.262625 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.365441 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.365488 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.365501 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.365518 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.365529 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.469122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.469243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.469269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.469293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.469310 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.572245 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.572314 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.572331 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.572355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.572376 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.674889 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.675003 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.675018 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.675039 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.675052 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.777648 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.777917 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.778082 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.778210 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.778327 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.881660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.881737 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.881765 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.881795 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.881818 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.917604 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:55:31.207153848 +0000 UTC Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.930193 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.930296 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:26 crc kubenswrapper[4899]: E0126 20:56:26.930361 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.930379 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:26 crc kubenswrapper[4899]: E0126 20:56:26.930522 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:26 crc kubenswrapper[4899]: E0126 20:56:26.930580 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.984170 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.984310 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.984409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.984527 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:26 crc kubenswrapper[4899]: I0126 20:56:26.984633 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:26Z","lastTransitionTime":"2026-01-26T20:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.086672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.086709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.086719 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.086734 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.086744 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.188701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.188740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.188751 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.188767 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.188778 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.291020 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.291051 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.291060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.291072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.291081 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.392866 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.392959 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.392986 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.393014 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.393036 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.495333 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.495711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.495885 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.496139 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.496429 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.598820 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.598889 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.598915 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.598988 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.599018 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.702128 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.702225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.703854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.703903 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.703957 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.806708 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.806773 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.806791 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.806817 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.806833 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.909943 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.909978 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.909989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.910005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.910016 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:27Z","lastTransitionTime":"2026-01-26T20:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.918411 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:21:58.977467508 +0000 UTC Jan 26 20:56:27 crc kubenswrapper[4899]: I0126 20:56:27.929695 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:27 crc kubenswrapper[4899]: E0126 20:56:27.929982 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.012711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.012747 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.012755 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.012770 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.012805 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.115439 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.115468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.115495 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.115508 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.115516 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.218289 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.218327 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.218354 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.218368 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.218377 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.320275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.320308 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.320316 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.320328 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.320338 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.422957 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.423008 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.423042 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.423060 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.423073 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.526456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.526525 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.526545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.526570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.526588 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.629637 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.629697 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.629717 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.629742 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.629760 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.732914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.733052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.733073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.733098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.733118 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.837004 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.837121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.837142 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.837167 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.837187 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.919258 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:36:09.480362376 +0000 UTC Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.929855 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:28 crc kubenswrapper[4899]: E0126 20:56:28.930204 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.930387 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.930452 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:28 crc kubenswrapper[4899]: E0126 20:56:28.930594 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:28 crc kubenswrapper[4899]: E0126 20:56:28.930788 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.939456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.939547 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.939569 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.939592 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:28 crc kubenswrapper[4899]: I0126 20:56:28.939610 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:28Z","lastTransitionTime":"2026-01-26T20:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.042336 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.042414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.042438 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.042468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.042492 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.144837 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.145005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.145031 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.145056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.145074 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.247453 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.247531 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.247550 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.247575 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.247592 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.350891 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.350989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.351007 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.351032 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.351050 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.453813 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.453863 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.453878 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.453900 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.453914 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.556491 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.556566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.556582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.556599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.556613 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.659218 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.659258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.659270 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.659285 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.659297 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.761096 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.761130 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.761141 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.761154 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.761165 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.862952 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.862990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.863001 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.863047 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.863061 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.919562 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:28:33.632591805 +0000 UTC Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.930095 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:29 crc kubenswrapper[4899]: E0126 20:56:29.930229 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.965373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.965442 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.965469 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.965499 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:29 crc kubenswrapper[4899]: I0126 20:56:29.965525 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:29Z","lastTransitionTime":"2026-01-26T20:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.068538 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.068602 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.068623 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.068653 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.068672 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.171266 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.171347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.171376 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.171410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.171436 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.273409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.273452 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.273463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.273477 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.273485 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.376029 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.376081 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.376097 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.376120 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.376134 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.478022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.478272 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.478383 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.478478 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.478564 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.580836 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.580887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.580904 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.580960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.580979 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.683378 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.683429 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.683444 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.683463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.683478 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.788219 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.788292 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.788309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.788332 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.788349 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.892948 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.893022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.893037 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.893054 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.893066 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.920583 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:54:35.997787321 +0000 UTC Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.930140 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.930229 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.930356 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:30 crc kubenswrapper[4899]: E0126 20:56:30.930416 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:30 crc kubenswrapper[4899]: E0126 20:56:30.930514 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:30 crc kubenswrapper[4899]: E0126 20:56:30.930551 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.948270 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:30Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.960802 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:30Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.978190 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:30Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.996309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.996398 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.996418 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.996473 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.996491 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:30Z","lastTransitionTime":"2026-01-26T20:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:30 crc kubenswrapper[4899]: I0126 20:56:30.999495 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:30Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.019336 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.035311 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.048562 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.084303 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.099117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.099165 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.099179 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 
20:56:31.099200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.099216 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.102285 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.117224 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.131594 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.147684 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.166704 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492
df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.180216 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc 
kubenswrapper[4899]: I0126 20:56:31.194771 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.204777 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.204835 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.204854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.204881 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.204899 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.208942 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.220990 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.307397 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.307463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.307473 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.307488 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.307516 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.409214 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.409264 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.409277 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.409295 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.409308 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.513236 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.513316 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.513339 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.513373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.513434 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.616666 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.616733 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.616756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.616784 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.616810 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.721125 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.721187 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.721209 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.721237 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.721262 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.824317 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.824363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.824380 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.824405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.824422 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.920994 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:16:08.212144263 +0000 UTC Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.927902 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.928067 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.928456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.928542 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.928820 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:31Z","lastTransitionTime":"2026-01-26T20:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.930385 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:31 crc kubenswrapper[4899]: E0126 20:56:31.930574 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:31 crc kubenswrapper[4899]: I0126 20:56:31.931595 4899 scope.go:117] "RemoveContainer" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.031511 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.031544 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.031555 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.031570 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.031581 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.134448 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.134498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.134516 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.134537 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.134552 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.239005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.239109 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.239186 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.239217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.239281 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.344486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.344547 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.344567 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.344593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.344614 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.447222 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.447286 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.447302 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.447322 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.447337 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.549583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.549642 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.549659 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.549682 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.549700 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.652910 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.652995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.653028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.653067 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.653089 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.755881 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.755955 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.755973 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.755999 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.756017 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.859165 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.859219 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.859236 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.859260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.859278 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.921777 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:45:49.449547491 +0000 UTC Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.930176 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.930392 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:32 crc kubenswrapper[4899]: E0126 20:56:32.930478 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.930521 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:32 crc kubenswrapper[4899]: E0126 20:56:32.930742 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:32 crc kubenswrapper[4899]: E0126 20:56:32.930816 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.964032 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.964118 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.964155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.964189 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:32 crc kubenswrapper[4899]: I0126 20:56:32.964212 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:32Z","lastTransitionTime":"2026-01-26T20:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.067160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.067445 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.067455 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.067470 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.067480 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.170247 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.170283 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.170293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.170309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.170322 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.272576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.272619 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.272632 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.272647 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.272656 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.375265 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.375296 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.375304 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.375316 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.375328 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.413823 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/2.log" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.415843 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.416184 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.426948 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.437229 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.448226 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.460582 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.470973 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.477277 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.477302 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.477326 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.477355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.477364 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.482003 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.489739 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.510737 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.520246 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.530961 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.539382 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.547563 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.556872 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.567902 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.575529 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc 
kubenswrapper[4899]: I0126 20:56:33.578839 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.578868 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.578879 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.578894 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.578905 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.585237 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.594962 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:33Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.682432 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.682493 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.682511 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.682537 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.682554 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.786053 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.786123 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.786148 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.786178 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.786197 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.889542 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.889588 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.889599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.889613 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.889624 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.922410 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:08:43.535552894 +0000 UTC Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.929861 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:33 crc kubenswrapper[4899]: E0126 20:56:33.930095 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.991761 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.992130 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.992358 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.992574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:33 crc kubenswrapper[4899]: I0126 20:56:33.992765 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:33Z","lastTransitionTime":"2026-01-26T20:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.096139 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.096211 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.096234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.096271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.096293 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.198829 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.198880 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.198897 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.198919 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.198960 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.302143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.302197 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.302213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.302240 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.302257 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.405487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.405560 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.405583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.405614 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.405636 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.423363 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/3.log" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.424435 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/2.log" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.428453 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" exitCode=1 Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.428510 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.428563 4899 scope.go:117] "RemoveContainer" containerID="96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.430716 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.431030 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.454415 4899 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.476472 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.488108 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.500977 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.509200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.509233 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.509243 4899 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.509260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.509271 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.521126 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\",\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 
named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.536457 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.550677 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.564408 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.575770 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.595883 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96f6422fa047163dcab4c35122579972ac20f54e279bfc7ee599594dc9697c64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:04Z\\\",\\\"message\\\":\\\".go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0126 20:56:03.790087 6613 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0126 20:56:03.789987 6613 ovnkube.go:599] Stopped ovnkube\\\\nI0126 20:56:03.790028 
6613 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:03.790101 6613 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 20:56:03.790129 6613 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 20:56:03.789733 6613 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5s8xd\\\\nI0126 20:56:03.790199 6613 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5s8xd in node crc\\\\nF0126 20:56:03.790202 6613 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:34Z\\\",\\\"message\\\":\\\"lf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4 openshift-image-registry/node-ca-t8lnv openshift-multus/multus-additional-cni-plugins-bpfpb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-operator/machine-config-daemon-wwvzr openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-vlmbq]\\\\nI0126 20:56:33.494585 7026 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in 
iterateRetryResources\\\\nI0126 20:56:33.494601 7026 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494611 7026 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494618 7026 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vlmbq in node crc\\\\nI0126 20:56:33.494623 7026 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:33.494629 7026 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494644 7026 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 20:56:33.494702 7026 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/e
tc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.613976 4899 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.615574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.615612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.615621 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.615645 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.615654 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.627921 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.639634 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c7756
51b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.655719 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.677165 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.698153 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492
df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.713071 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:34 crc 
kubenswrapper[4899]: I0126 20:56:34.718003 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.718034 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.718043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.718056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.718066 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.812742 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.812978 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:38.812907149 +0000 UTC m=+148.194495226 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.820837 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.820895 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.820913 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.820990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.821010 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.914625 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.914730 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.914775 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.914828 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.914989 4899 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915085 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:57:38.915057133 +0000 UTC m=+148.296645210 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915595 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915634 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915658 4899 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915652 4899 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915725 4899 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 20:57:38.915707402 +0000 UTC m=+148.297295479 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.915785 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 20:57:38.915745844 +0000 UTC m=+148.297334061 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.916223 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.916372 4899 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.916491 4899 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.916842 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 20:57:38.916816016 +0000 UTC m=+148.298404083 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.922679 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:16:18.766537812 +0000 UTC Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.924312 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.925000 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.925203 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.925349 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.925499 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:34Z","lastTransitionTime":"2026-01-26T20:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.929802 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.929836 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.930311 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:34 crc kubenswrapper[4899]: I0126 20:56:34.929875 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.930422 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:34 crc kubenswrapper[4899]: E0126 20:56:34.930895 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.029074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.029147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.029168 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.029202 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.029226 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.133088 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.133152 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.133171 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.133198 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.133216 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.236522 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.236810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.236965 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.237073 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.237156 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.340127 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.340195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.340213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.340258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.340276 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.363241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.363576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.363647 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.363725 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.363783 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.383303 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.388705 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.388773 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.388791 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.388815 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.388831 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.408402 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.413509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.413555 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.413574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.413598 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.413615 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.433444 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.434693 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/3.log" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.438474 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.438542 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.438563 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.438585 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.438606 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.441443 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.441791 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.459422 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.462168 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.463457 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.463524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.463541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.463566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.463584 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.473299 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.483476 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.484081 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.486217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.486246 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.486256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.486271 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.486285 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.492260 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d1393
11865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.504971 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.517425 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.531971 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.543392 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.556238 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.577260 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:34Z\\\",\\\"message\\\":\\\"lf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4 openshift-image-registry/node-ca-t8lnv openshift-multus/multus-additional-cni-plugins-bpfpb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-operator/machine-config-daemon-wwvzr openshift-network-diagnostics/network-check-target-xd92c 
openshift-dns/node-resolver-vlmbq]\\\\nI0126 20:56:33.494585 7026 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0126 20:56:33.494601 7026 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494611 7026 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494618 7026 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vlmbq in node crc\\\\nI0126 20:56:33.494623 7026 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:33.494629 7026 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494644 7026 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 20:56:33.494702 7026 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.588512 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.588563 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.588572 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.588585 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.588609 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.592447 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.602890 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.613454 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.625965 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29
a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.638044 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.650652 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492
df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.662725 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc 
kubenswrapper[4899]: I0126 20:56:35.673666 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:35Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.690756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.690807 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.690819 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.690838 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.690988 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.793608 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.793643 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.793651 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.793664 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.793673 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.896574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.896638 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.896656 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.896681 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.896702 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:35Z","lastTransitionTime":"2026-01-26T20:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.923022 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:00:52.139135776 +0000 UTC Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.930025 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:35 crc kubenswrapper[4899]: E0126 20:56:35.930288 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:35 crc kubenswrapper[4899]: I0126 20:56:35.962519 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.001251 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.001293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.001305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.001321 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.001334 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.103132 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.103161 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.103172 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.103186 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.103211 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.205894 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.205962 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.205975 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.205993 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.206005 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.308509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.308575 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.308600 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.308632 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.308657 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.411853 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.411962 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.411990 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.412018 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.412037 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.514877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.514988 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.515013 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.515038 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.515056 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.618762 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.618797 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.618808 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.618823 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.618834 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.723315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.723371 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.723385 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.723405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.723420 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.826325 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.826355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.826367 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.826383 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.826395 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.924168 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:54:31.292546628 +0000 UTC Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.929123 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.929184 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.929208 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.929238 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.929261 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:36Z","lastTransitionTime":"2026-01-26T20:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.930274 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.930317 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:36 crc kubenswrapper[4899]: E0126 20:56:36.930392 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:36 crc kubenswrapper[4899]: E0126 20:56:36.930485 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:36 crc kubenswrapper[4899]: I0126 20:56:36.930525 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:36 crc kubenswrapper[4899]: E0126 20:56:36.930631 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.032594 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.032686 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.032704 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.032759 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.032778 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.136061 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.136095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.136106 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.136122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.136133 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.238615 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.238689 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.238716 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.238748 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.238767 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.342256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.342317 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.342335 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.342360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.342378 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.446275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.446337 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.446352 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.446373 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.446388 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.548381 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.548419 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.548427 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.548439 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.548448 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.651793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.651833 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.651844 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.651860 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.651872 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.755414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.755456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.755470 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.755485 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.755496 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.858364 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.858399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.858410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.858426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.858437 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.924430 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:46:40.095962018 +0000 UTC Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.930013 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:37 crc kubenswrapper[4899]: E0126 20:56:37.930217 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.961464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.961526 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.961545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.961571 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:37 crc kubenswrapper[4899]: I0126 20:56:37.961588 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:37Z","lastTransitionTime":"2026-01-26T20:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.064599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.064636 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.064644 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.064697 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.064708 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.167782 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.167846 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.167868 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.167897 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.167921 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.270856 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.270917 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.270985 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.271013 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.271030 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.374089 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.374158 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.374181 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.374212 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.374231 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.476707 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.476787 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.476819 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.476850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.476872 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.579423 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.579499 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.579514 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.579531 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.579543 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.681634 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.681696 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.681722 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.681755 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.681781 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.785027 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.785080 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.785098 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.785121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.785138 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.888583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.888652 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.888673 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.888699 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.888719 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.925463 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:28:33.582588015 +0000 UTC Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.930160 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.930161 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:38 crc kubenswrapper[4899]: E0126 20:56:38.930379 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.930186 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:38 crc kubenswrapper[4899]: E0126 20:56:38.930453 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:38 crc kubenswrapper[4899]: E0126 20:56:38.930636 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.991166 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.991211 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.991222 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.991241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:38 crc kubenswrapper[4899]: I0126 20:56:38.991253 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:38Z","lastTransitionTime":"2026-01-26T20:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.094343 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.094389 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.094404 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.094426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.094440 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.197307 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.197366 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.197387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.197413 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.197432 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.300681 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.300766 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.300793 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.300825 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.300848 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.404858 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.404910 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.404972 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.405005 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.405027 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.508624 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.508680 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.508697 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.509040 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.509097 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.620020 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.620083 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.620107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.620137 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.620167 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.723807 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.723877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.723900 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.723971 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.723999 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.827305 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.827360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.827380 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.827403 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.827423 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.926628 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:28:03.575939921 +0000 UTC Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.929859 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:39 crc kubenswrapper[4899]: E0126 20:56:39.930007 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.930227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.930296 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.930329 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.930360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:39 crc kubenswrapper[4899]: I0126 20:56:39.930383 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:39Z","lastTransitionTime":"2026-01-26T20:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.032461 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.032522 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.032540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.032568 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.032588 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.139467 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.139916 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.140107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.140243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.140427 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.243409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.243486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.243509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.243539 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.243561 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.345799 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.345853 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.345870 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.345892 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.345908 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.449035 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.449107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.449125 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.449147 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.449164 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.552960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.553324 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.553499 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.553686 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.553840 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.657308 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.657345 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.657353 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.657369 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.657380 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.760612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.760677 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.760695 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.760721 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.760738 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.863681 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.863724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.863740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.863764 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.863781 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.926759 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:54:18.969746249 +0000 UTC Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.930353 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.930445 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:40 crc kubenswrapper[4899]: E0126 20:56:40.930552 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.930809 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:40 crc kubenswrapper[4899]: E0126 20:56:40.930885 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:40 crc kubenswrapper[4899]: E0126 20:56:40.931317 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.943488 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f49476-befa-4689-91cb-c0a8cc1def3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5s8xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:40 crc 
kubenswrapper[4899]: I0126 20:56:40.962299 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.966566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.966617 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.966640 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.966672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.966694 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:40Z","lastTransitionTime":"2026-01-26T20:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:40 crc kubenswrapper[4899]: I0126 20:56:40.977397 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24sf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595ae596-1477-4438-94f7-69400dc1ba20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:19Z\\\",\\\"message\\\":\\\"2026-01-26T20:55:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8\\\\n2026-01-26T20:55:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89ee65d1-b1d9-44d2-beb7-723869cffad8 to /host/opt/cni/bin/\\\\n2026-01-26T20:55:33Z [verbose] multus-daemon started\\\\n2026-01-26T20:55:33Z [verbose] Readiness Indicator file check\\\\n2026-01-26T20:56:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24sf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.001393 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb93604e-ad41-45c0-959d-1af0694fd11d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492df90ed5d2b80b46cecd17801503e90b604fead44c065c49528c9690b4a160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593c3255c752b7ee3ae77477b31ebcfe976ae93047ff8c23dc0d47e42af3df10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef155d009af096c55d1ac2ed9b3a217bf2f26072de02efde1a31020a1f94005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deae9716c19e97774f6ca334a85e0bf8fe12ea98aca8fc7d10089c4d211ba96b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a579
cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a579cdbfb4ae9f09d61019939680fa9eafbfbb7fa9e9d2f81922ee555cc0878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5993a3eb248e48c89e610130ad74d4a7faef70664d8e29a7251c4bfb33cf8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e06d89c076a2a5a9f7c34a9132b2c8641c89685b44703287137562f320f3164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lk7ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bpfpb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:40Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.017154 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8lnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3eee89-3332-4ac0-8c40-c7b77bfd9ee7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7209fc7f042828e6cdf76b10964b197726562e5072cebee924bec5a423b899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8lnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.035596 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4694410b-89a1-4c5d-afd1-41184a083c4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f07eec553da37f741c9fec74065a29e2440a65ecd2237c83578405d875c3af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6d6af1273edb7ad3dab261f303c4b0cd46eadfa95b090d0f0228288a0f5ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c836b07b87bce8bd42428236ae06094682107933ba8077fc41a66ff1a3a4313e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.055814 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a3864c90c098e87bad97c494ccd8d480b803ade00231085e586beadfec829f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a4a07792a24b6fc54cbeae9a1d621ea9a967b623ebf62f4e528e6bae9e1768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.069580 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.069646 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.069670 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.069700 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.069722 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.076660 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.096048 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dc40390aea454ec89d457bafc32e87e6f187691c00d5f18effa058e5b83401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.113331 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vlmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eb474cc-d8b2-4d69-a738-90b30e635e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e0cf25e3d1893798492ce70e3fdd84bf763811e8183dbabe72dfe454c1ae83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vlmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.136977 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30d7d720-d73a-488d-b6ec-755f5da1888c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T20:56:34Z\\\",\\\"message\\\":\\\"lf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4 openshift-image-registry/node-ca-t8lnv openshift-multus/multus-additional-cni-plugins-bpfpb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-operator/machine-config-daemon-wwvzr openshift-network-diagnostics/network-check-target-xd92c 
openshift-dns/node-resolver-vlmbq]\\\\nI0126 20:56:33.494585 7026 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0126 20:56:33.494601 7026 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494611 7026 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494618 7026 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vlmbq in node crc\\\\nI0126 20:56:33.494623 7026 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vlmbq after 0 failed attempt(s)\\\\nI0126 20:56:33.494629 7026 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vlmbq\\\\nI0126 20:56:33.494644 7026 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 20:56:33.494702 7026 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://284ad70c13080d566e
bbab59de5cefb5a78e0d885aab6281002831296945203a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt664\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrvcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.154396 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f199371a-c546-4a47-b96b-be3768b02b36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6337a3488685aada7e51eb60802f0e97119ef23a01e08db83d1950da3b2755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac177559351b6d0279c6e1a62bb0bcf85ccbd161b8ac29079d7b463b964402b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ec6ab43e988842fc52ebf3b5f8e16b54cceabf1cadbde4641af6ee3c2fe837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://532c60292c16ce82b7933d48a6fbab526a7bec434d69c34aa28c6b47eb269105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.172995 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T20:55:29Z\\\"
,\\\"message\\\":\\\"questheader-client-ca-file\\\\\\\"\\\\nI0126 20:55:29.616707 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 20:55:29.616834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0126 20:55:29.616845 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 20:55:29.616963 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769460913\\\\\\\\\\\\\\\" (2026-01-26 20:55:12 +0000 UTC to 2026-02-25 20:55:13 +0000 UTC (now=2026-01-26 20:55:29.61689709 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617179 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769460923\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769460923\\\\\\\\\\\\\\\" (2026-01-26 19:55:23 +0000 UTC to 2027-01-26 19:55:23 +0000 UTC (now=2026-01-26 20:55:29.617145947 +0000 UTC))\\\\\\\"\\\\nI0126 20:55:29.617214 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0126 20:55:29.617251 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0126 20:55:29.617329 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3665950671/tls.crt::/tmp/serving-cert-3665950671/tls.key\\\\\\\"\\\\nI0126 20:55:29.617498 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0126 
20:55:29.621096 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0126 20:55:29.621695 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0126 20:55:29.622634 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.174603 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.174708 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.174725 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.174750 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.174823 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.194704 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6271dc74170107c71f3dc327b9ea1b80f7c915a2cd9244b9e007bb2569ef0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.209518 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2285c985-da54-4035-b72d-06f9c067f463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3210b07af5e37a5412fb56e91e6cfe66b3ac1c11006b2280ef9d749f63248c
4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be8c775651b32201f9a542dfcdc99669ce29a9cdc1f913a73d9abe5e4d368dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2thn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vl6k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.223065 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32d79537-74d2-4f2c-998c-7cc9d836cbae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe78673c3c9d93a82c34200cda3ec05c07d2b88c77242644fb81bfb8823589b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04957116bc47667fa31fa0df4d91ab9b03496c10ae8ff3964d5a3814d37fd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04957116bc47667fa31fa0df4d91ab9b03496c10ae8ff3964d5a3814d37fd374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T20:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T20:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.238435 4899 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.252826 4899 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af2334b6-f4a1-489a-acb2-0ddef342559d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T20:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aedabdb2a4e2d458f87009fa8d67bea0f079d46f5d060d4d1f3142efe8777623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b18199fe65050438c43f75a34ce17335713433
3fcf0881fd32d7fc561416a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T20:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5n4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T20:55:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.277906 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.278026 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.278052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc 
kubenswrapper[4899]: I0126 20:56:41.278084 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.278107 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.380771 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.380825 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.380842 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.380867 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.380885 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.486544 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.486595 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.486606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.486623 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.486635 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.589873 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.589967 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.589986 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.590010 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.590029 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.692973 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.693054 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.693074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.693101 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.693120 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.796049 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.796100 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.796117 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.796140 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.796158 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.898230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.898263 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.898273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.898285 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.898294 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:41Z","lastTransitionTime":"2026-01-26T20:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.927463 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:25:34.278394842 +0000 UTC Jan 26 20:56:41 crc kubenswrapper[4899]: I0126 20:56:41.929813 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:41 crc kubenswrapper[4899]: E0126 20:56:41.929914 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.001694 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.001886 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.001920 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.002022 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.002107 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.105711 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.105762 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.105779 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.105803 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.105824 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.209399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.209458 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.209470 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.209484 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.209495 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.311563 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.311620 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.311636 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.311660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.311678 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.415342 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.415431 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.415464 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.415497 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.415520 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.518195 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.518258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.518275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.518301 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.518331 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.621568 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.621691 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.621718 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.621751 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.621773 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.724557 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.724612 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.724628 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.724649 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.724665 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.827774 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.827832 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.827850 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.827874 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.827896 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.927639 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:20:46.567256768 +0000 UTC Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.929698 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.929731 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.929759 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:42 crc kubenswrapper[4899]: E0126 20:56:42.930048 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:42 crc kubenswrapper[4899]: E0126 20:56:42.930331 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:42 crc kubenswrapper[4899]: E0126 20:56:42.930809 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.931194 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.931229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.931241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.931257 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:42 crc kubenswrapper[4899]: I0126 20:56:42.931269 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:42Z","lastTransitionTime":"2026-01-26T20:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.033895 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.034010 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.034036 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.034065 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.034085 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.137285 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.137343 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.137355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.137378 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.137390 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.239903 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.239964 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.239977 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.239994 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.240008 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.343001 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.343043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.343054 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.343068 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.343077 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.445310 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.445351 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.445360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.445377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.445387 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.548009 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.548050 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.548061 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.548077 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.548089 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.651552 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.651592 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.651603 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.651619 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.651629 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.754651 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.754700 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.754715 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.754735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.754750 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.857473 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.857536 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.857553 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.857584 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.857603 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.928532 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:34:31.253378161 +0000 UTC Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.929779 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:43 crc kubenswrapper[4899]: E0126 20:56:43.930124 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.960424 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.960482 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.960501 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.960526 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:43 crc kubenswrapper[4899]: I0126 20:56:43.960545 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:43Z","lastTransitionTime":"2026-01-26T20:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.064204 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.064291 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.064315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.064344 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.064367 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.166784 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.166830 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.166845 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.166863 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.166877 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.269175 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.269215 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.269225 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.269241 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.269252 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.371099 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.371135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.371149 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.371163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.371174 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.472573 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.472623 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.472634 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.472651 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.472662 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.574981 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.575062 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.575089 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.575122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.575145 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.677576 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.677609 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.677619 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.677635 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.677648 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.781186 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.781243 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.781260 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.781287 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.781305 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.884292 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.884353 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.884371 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.884398 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.884415 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.929178 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:24:20.045744445 +0000 UTC Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.930645 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.930735 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:44 crc kubenswrapper[4899]: E0126 20:56:44.930883 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.930988 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:44 crc kubenswrapper[4899]: E0126 20:56:44.931105 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:44 crc kubenswrapper[4899]: E0126 20:56:44.931385 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.987396 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.987447 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.987457 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.987473 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:44 crc kubenswrapper[4899]: I0126 20:56:44.987483 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:44Z","lastTransitionTime":"2026-01-26T20:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.090056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.090092 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.090104 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.090122 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.090132 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.192748 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.192777 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.192785 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.192797 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.192806 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.295273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.295300 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.295308 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.295320 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.295329 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.397759 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.397809 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.397817 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.397830 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.397838 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.499468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.499545 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.499557 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.499575 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.499586 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.602509 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.602548 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.602559 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.602578 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.602591 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.642079 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.642143 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.642160 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.642183 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.642200 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: E0126 20:56:45.663383 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.669202 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.669266 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.669284 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.669308 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.669325 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.693680 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.693782 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.693801 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.693826 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.693844 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.718463 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.718526 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.718549 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.718577 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.718600 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: E0126 20:56:45.736790 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.741354 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.741413 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.741433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.741457 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.741474 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: E0126 20:56:45.756544 4899 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T20:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b67aa14a-3c73-44b6-a040-2aaa760f288c\\\",\\\"systemUUID\\\":\\\"ad899ebe-e8fa-491d-aaa1-e267ccbcc124\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T20:56:45Z is after 2025-08-24T17:21:41Z" Jan 26 20:56:45 crc kubenswrapper[4899]: E0126 20:56:45.756661 4899 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.758577 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.758600 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.758610 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.758625 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.758635 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.861970 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.862063 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.862082 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.862106 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.862123 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.929547 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:17:47.076160101 +0000 UTC Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.929624 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:45 crc kubenswrapper[4899]: E0126 20:56:45.929850 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.965314 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.965365 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.965384 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.965405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:45 crc kubenswrapper[4899]: I0126 20:56:45.965422 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:45Z","lastTransitionTime":"2026-01-26T20:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.069381 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.069498 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.069518 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.069541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.069558 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.173405 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.173564 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.173595 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.173626 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.173652 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.276113 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.276180 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.276198 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.276220 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.276236 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.379767 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.379849 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.379874 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.379912 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.379969 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.481758 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.481800 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.481811 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.481828 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.481861 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.585304 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.585381 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.585401 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.585428 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.585447 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.689093 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.689156 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.689172 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.689197 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.689218 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.791828 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.791875 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.791886 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.791902 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.791913 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.894603 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.894656 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.894670 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.894688 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.894703 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.930352 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:29:49.256190005 +0000 UTC Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.930384 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.930485 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.930449 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:46 crc kubenswrapper[4899]: E0126 20:56:46.930724 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:46 crc kubenswrapper[4899]: E0126 20:56:46.930964 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:46 crc kubenswrapper[4899]: E0126 20:56:46.931059 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.997995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.998064 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.998085 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.998107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:46 crc kubenswrapper[4899]: I0126 20:56:46.998125 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:46Z","lastTransitionTime":"2026-01-26T20:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.101040 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.101095 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.101110 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.101131 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.101147 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.204363 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.204402 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.204410 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.204426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.204435 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.307133 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.307177 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.307190 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.307205 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.307215 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.410107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.410150 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.410162 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.410179 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.410191 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.512684 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.512776 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.512800 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.512820 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.512834 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.616075 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.616112 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.616121 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.616135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.616145 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.718275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.718311 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.718322 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.718337 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.718347 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.820920 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.821004 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.821014 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.821029 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.821038 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.922540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.922582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.922593 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.922607 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.922616 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:47Z","lastTransitionTime":"2026-01-26T20:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.930064 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:47 crc kubenswrapper[4899]: E0126 20:56:47.930178 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:47 crc kubenswrapper[4899]: I0126 20:56:47.931145 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:13:15.537444904 +0000 UTC Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.025105 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.025134 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.025144 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.025155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.025164 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.126816 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.126880 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.126899 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.126963 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.127155 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.230231 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.230357 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.230377 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.230402 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.230420 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.333809 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.333876 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.333902 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.333984 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.334012 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.436740 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.437028 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.437111 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.437204 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.437283 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.540453 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.540555 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.540566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.540579 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.540588 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.644582 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.644692 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.644719 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.644747 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.644772 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.747718 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.747750 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.747758 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.747770 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.747779 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.850191 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.850230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.850275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.850298 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.850306 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.930121 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.930120 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:48 crc kubenswrapper[4899]: E0126 20:56:48.930256 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.930295 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:48 crc kubenswrapper[4899]: E0126 20:56:48.930324 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:48 crc kubenswrapper[4899]: E0126 20:56:48.930365 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.931514 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:48:25.602304291 +0000 UTC Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.952069 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.952115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.952128 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.952145 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:48 crc kubenswrapper[4899]: I0126 20:56:48.952157 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:48Z","lastTransitionTime":"2026-01-26T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.054301 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.054379 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.054399 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.054419 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.054435 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.156668 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.156709 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.156720 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.156735 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.156748 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.260155 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.260217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.260235 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.260257 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.260274 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.363475 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.363514 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.363525 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.363540 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.363552 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.466795 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.466848 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.466857 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.466872 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.466883 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.569743 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.569837 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.569872 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.569921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.569991 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.672896 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.672989 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.673016 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.673044 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.673064 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.775918 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.776059 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.776072 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.776088 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.776102 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.879234 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.879315 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.879335 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.879360 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.879376 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.930565 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:49 crc kubenswrapper[4899]: E0126 20:56:49.930772 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.931455 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:56:49 crc kubenswrapper[4899]: E0126 20:56:49.931649 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.931685 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:04:30.821055012 +0000 UTC Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.982599 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.982656 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.982674 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.982697 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:49 crc kubenswrapper[4899]: I0126 20:56:49.982718 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:49Z","lastTransitionTime":"2026-01-26T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.085988 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.086056 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.086074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.086100 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.086118 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.188968 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.189041 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.189052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.189075 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.189089 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.291877 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.291960 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.291975 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.291996 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.292010 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.395526 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.395574 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.395583 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.395605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.395619 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.478270 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:50 crc kubenswrapper[4899]: E0126 20:56:50.478624 4899 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:50 crc kubenswrapper[4899]: E0126 20:56:50.478815 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs podName:88f49476-befa-4689-91cb-c0a8cc1def3d nodeName:}" failed. No retries permitted until 2026-01-26 20:57:54.478768267 +0000 UTC m=+163.860356344 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs") pod "network-metrics-daemon-5s8xd" (UID: "88f49476-befa-4689-91cb-c0a8cc1def3d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.497456 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.497503 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.497511 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.497525 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.497534 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.601424 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.601516 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.601537 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.601566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.601590 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.706223 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.706307 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.706332 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.706366 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.706392 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.810644 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.810690 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.810701 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.810716 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.810728 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.914476 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.914565 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.914590 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.914672 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.914701 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:50Z","lastTransitionTime":"2026-01-26T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.930017 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.930134 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.930180 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:50 crc kubenswrapper[4899]: E0126 20:56:50.930361 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:50 crc kubenswrapper[4899]: E0126 20:56:50.930612 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:50 crc kubenswrapper[4899]: E0126 20:56:50.930722 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:50 crc kubenswrapper[4899]: I0126 20:56:50.932368 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:50:56.48113243 +0000 UTC Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.007883 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-24sf9" podStartSLOduration=80.007854179 podStartE2EDuration="1m20.007854179s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:50.98575718 +0000 UTC m=+100.367345247" watchObservedRunningTime="2026-01-26 20:56:51.007854179 +0000 UTC m=+100.389442236" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.009882 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bpfpb" podStartSLOduration=80.00986908 podStartE2EDuration="1m20.00986908s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.006517329 +0000 UTC m=+100.388105396" watchObservedRunningTime="2026-01-26 20:56:51.00986908 +0000 UTC m=+100.391457157" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.021369 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.021420 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.021431 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.021450 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.021462 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.052968 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.052953035 podStartE2EDuration="1m19.052953035s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.05278698 +0000 UTC m=+100.434375027" watchObservedRunningTime="2026-01-26 20:56:51.052953035 +0000 UTC m=+100.434541072" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.095215 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t8lnv" podStartSLOduration=80.095197774 podStartE2EDuration="1m20.095197774s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.095120772 +0000 UTC m=+100.476708849" watchObservedRunningTime="2026-01-26 20:56:51.095197774 +0000 UTC m=+100.476785811" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.123850 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-vlmbq" podStartSLOduration=80.123825691 podStartE2EDuration="1m20.123825691s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.122883623 +0000 UTC m=+100.504471670" watchObservedRunningTime="2026-01-26 20:56:51.123825691 +0000 UTC m=+100.505413718" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.124223 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.124280 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.124292 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.124313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.124326 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.179597 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.179577359 podStartE2EDuration="51.179577359s" podCreationTimestamp="2026-01-26 20:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.164041779 +0000 UTC m=+100.545629836" watchObservedRunningTime="2026-01-26 20:56:51.179577359 +0000 UTC m=+100.561165396" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.197061 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.197023458 podStartE2EDuration="1m20.197023458s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.179665552 +0000 UTC m=+100.561253609" watchObservedRunningTime="2026-01-26 20:56:51.197023458 +0000 UTC m=+100.578611495" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.226486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.226539 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.226551 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.226566 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.226579 4899 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.229187 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.229170491 podStartE2EDuration="16.229170491s" podCreationTimestamp="2026-01-26 20:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.227521411 +0000 UTC m=+100.609109448" watchObservedRunningTime="2026-01-26 20:56:51.229170491 +0000 UTC m=+100.610758528" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.256104 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podStartSLOduration=80.256084426 podStartE2EDuration="1m20.256084426s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:51.255585511 +0000 UTC m=+100.637173548" watchObservedRunningTime="2026-01-26 20:56:51.256084426 +0000 UTC m=+100.637672463" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.276458 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vl6k4" podStartSLOduration=79.276421122 podStartE2EDuration="1m19.276421122s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-26 20:56:51.275246417 +0000 UTC m=+100.656834454" watchObservedRunningTime="2026-01-26 20:56:51.276421122 +0000 UTC m=+100.658009159" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.333679 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.333731 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.333749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.333767 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.333780 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.436640 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.436732 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.436758 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.436796 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.436825 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.539831 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.539875 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.539891 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.539916 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.539957 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.643081 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.643125 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.643137 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.643153 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.643165 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.746854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.746921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.746966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.746994 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.747017 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.851163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.851238 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.851258 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.851288 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.851309 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.930265 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:51 crc kubenswrapper[4899]: E0126 20:56:51.930563 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.932949 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:40:24.70082548 +0000 UTC Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.955660 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.955746 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.955765 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.955787 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:51 crc kubenswrapper[4899]: I0126 20:56:51.955827 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:51Z","lastTransitionTime":"2026-01-26T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.059810 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.059880 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.059898 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.059961 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.059992 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.163347 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.163414 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.163433 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.163468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.163495 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.267042 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.267115 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.267136 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.267170 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.267191 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.370213 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.370281 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.370300 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.370331 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.370364 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.474008 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.474064 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.474074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.474093 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.474103 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.578018 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.578088 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.578106 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.578135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.578155 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.682200 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.682299 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.682318 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.682345 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.682365 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.787881 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.787991 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.788025 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.788063 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.788087 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.891099 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.891201 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.891222 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.891256 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.891277 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.930846 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.930902 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.931050 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:52 crc kubenswrapper[4899]: E0126 20:56:52.931115 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:52 crc kubenswrapper[4899]: E0126 20:56:52.931375 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:52 crc kubenswrapper[4899]: E0126 20:56:52.931665 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.933831 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:00:01.86745303 +0000 UTC Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.994355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.994426 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.994453 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.994487 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:52 crc kubenswrapper[4899]: I0126 20:56:52.994512 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:52Z","lastTransitionTime":"2026-01-26T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.097919 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.097971 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.097982 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.098052 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.098063 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.201465 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.201512 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.201524 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.201541 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.201552 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.304822 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.304885 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.304902 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.304965 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.304983 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.408276 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.408334 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.408353 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.408379 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.408397 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.511229 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.511293 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.511309 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.511335 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.511352 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.614800 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.614899 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.614921 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.614995 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.615024 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.719792 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.719887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.719914 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.720014 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.720043 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.824468 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.824543 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.824564 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.824596 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.824618 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.928611 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.928681 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.928700 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.928728 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.928749 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:53Z","lastTransitionTime":"2026-01-26T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.929862 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:53 crc kubenswrapper[4899]: E0126 20:56:53.930014 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:53 crc kubenswrapper[4899]: I0126 20:56:53.934428 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:41:13.797357835 +0000 UTC Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.033074 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.033153 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.033173 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.033201 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.033221 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.136444 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.136517 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.136528 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.136549 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.136566 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.240461 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.240954 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.241120 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.241269 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.241406 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.345227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.345311 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.345336 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.345369 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.345392 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.448273 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.448328 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.448349 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.448374 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.448392 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.551558 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.551840 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.552019 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.552184 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.552389 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.656066 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.656577 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.656749 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.656911 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.657093 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.760484 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.760578 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.760606 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.760650 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.760678 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.863887 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.863953 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.863966 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.863983 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.863998 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.930652 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.930717 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.930777 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:54 crc kubenswrapper[4899]: E0126 20:56:54.930868 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:54 crc kubenswrapper[4899]: E0126 20:56:54.931011 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:54 crc kubenswrapper[4899]: E0126 20:56:54.931188 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.934850 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:17:43.050210955 +0000 UTC Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.967043 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.967107 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.967127 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.967154 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:54 crc kubenswrapper[4899]: I0126 20:56:54.967178 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:54Z","lastTransitionTime":"2026-01-26T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.071101 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.071170 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.071188 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.071217 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.071238 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.174756 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.174834 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.174854 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.174889 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.174913 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.279418 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.279486 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.279504 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.279535 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.279554 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.383629 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.383705 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.383724 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.383781 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.383801 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.486987 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.487101 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.487113 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.487135 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.487148 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.590370 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.590452 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.590472 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.590502 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.590527 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.693123 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.693183 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.693202 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.693227 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.693246 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.797313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.797387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.797415 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.797444 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.797467 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.900289 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.900355 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.900375 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.900407 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.900429 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:55Z","lastTransitionTime":"2026-01-26T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.930021 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:55 crc kubenswrapper[4899]: E0126 20:56:55.930214 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:55 crc kubenswrapper[4899]: I0126 20:56:55.935875 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:27:28.412855564 +0000 UTC Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.004230 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.004370 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.004387 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.004409 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.004422 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:56Z","lastTransitionTime":"2026-01-26T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.101163 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.101248 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.101275 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.101313 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.101339 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:56Z","lastTransitionTime":"2026-01-26T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.130532 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.130592 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.130605 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.130628 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.130643 4899 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T20:56:56Z","lastTransitionTime":"2026-01-26T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.169863 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp"] Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.170260 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.173626 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.174271 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.176104 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.176609 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.247840 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1271dff1-4390-42a8-b383-e71fce493bcd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.247997 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1271dff1-4390-42a8-b383-e71fce493bcd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.248069 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.248122 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1271dff1-4390-42a8-b383-e71fce493bcd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.248201 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349264 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1271dff1-4390-42a8-b383-e71fce493bcd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349341 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1271dff1-4390-42a8-b383-e71fce493bcd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349409 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349455 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1271dff1-4390-42a8-b383-e71fce493bcd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349529 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349611 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.349641 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/1271dff1-4390-42a8-b383-e71fce493bcd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.351833 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1271dff1-4390-42a8-b383-e71fce493bcd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.364275 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1271dff1-4390-42a8-b383-e71fce493bcd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.376402 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1271dff1-4390-42a8-b383-e71fce493bcd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ndxsp\" (UID: \"1271dff1-4390-42a8-b383-e71fce493bcd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.498391 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.930408 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.930474 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.930551 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:56 crc kubenswrapper[4899]: E0126 20:56:56.930618 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:56 crc kubenswrapper[4899]: E0126 20:56:56.930848 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:56 crc kubenswrapper[4899]: E0126 20:56:56.930964 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.936287 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:33:43.982863713 +0000 UTC Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.936339 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 20:56:56 crc kubenswrapper[4899]: I0126 20:56:56.946689 4899 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 20:56:57 crc kubenswrapper[4899]: I0126 20:56:57.519053 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" event={"ID":"1271dff1-4390-42a8-b383-e71fce493bcd","Type":"ContainerStarted","Data":"8ac513139af21cd483ea8dcc553fb49808297bb7d00dfb65861d4bec9a3f99c2"} Jan 26 20:56:57 crc kubenswrapper[4899]: I0126 20:56:57.519349 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" event={"ID":"1271dff1-4390-42a8-b383-e71fce493bcd","Type":"ContainerStarted","Data":"a378cdc5919f8f49b14e5b247ddb12590dc83b081e4c270163df96c46a76b04c"} Jan 26 20:56:57 crc kubenswrapper[4899]: I0126 20:56:57.538481 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ndxsp" podStartSLOduration=86.538446175 podStartE2EDuration="1m26.538446175s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:56:57.537457895 +0000 UTC m=+106.919045932" watchObservedRunningTime="2026-01-26 20:56:57.538446175 +0000 UTC m=+106.920034252" 
Jan 26 20:56:57 crc kubenswrapper[4899]: I0126 20:56:57.929781 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:57 crc kubenswrapper[4899]: E0126 20:56:57.929984 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:56:58 crc kubenswrapper[4899]: I0126 20:56:58.930488 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:56:58 crc kubenswrapper[4899]: I0126 20:56:58.930521 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:56:58 crc kubenswrapper[4899]: E0126 20:56:58.930613 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:56:58 crc kubenswrapper[4899]: I0126 20:56:58.930480 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:56:58 crc kubenswrapper[4899]: E0126 20:56:58.930728 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:56:58 crc kubenswrapper[4899]: E0126 20:56:58.930884 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:56:58 crc kubenswrapper[4899]: I0126 20:56:58.948712 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 20:56:59 crc kubenswrapper[4899]: I0126 20:56:59.929992 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:56:59 crc kubenswrapper[4899]: E0126 20:56:59.930139 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:00 crc kubenswrapper[4899]: I0126 20:57:00.930166 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:00 crc kubenswrapper[4899]: I0126 20:57:00.930170 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:00 crc kubenswrapper[4899]: I0126 20:57:00.930247 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:00 crc kubenswrapper[4899]: E0126 20:57:00.931770 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:00 crc kubenswrapper[4899]: E0126 20:57:00.932073 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:00 crc kubenswrapper[4899]: E0126 20:57:00.932210 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:00 crc kubenswrapper[4899]: I0126 20:57:00.957860 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.957838104 podStartE2EDuration="2.957838104s" podCreationTimestamp="2026-01-26 20:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:00.957520354 +0000 UTC m=+110.339108391" watchObservedRunningTime="2026-01-26 20:57:00.957838104 +0000 UTC m=+110.339426141" Jan 26 20:57:01 crc kubenswrapper[4899]: I0126 20:57:01.930529 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:01 crc kubenswrapper[4899]: E0126 20:57:01.930917 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:01 crc kubenswrapper[4899]: I0126 20:57:01.931040 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:57:01 crc kubenswrapper[4899]: E0126 20:57:01.931166 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:57:02 crc kubenswrapper[4899]: I0126 20:57:02.929690 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:02 crc kubenswrapper[4899]: I0126 20:57:02.929691 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:02 crc kubenswrapper[4899]: E0126 20:57:02.930044 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:02 crc kubenswrapper[4899]: E0126 20:57:02.930121 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:02 crc kubenswrapper[4899]: I0126 20:57:02.930268 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:02 crc kubenswrapper[4899]: E0126 20:57:02.930342 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:03 crc kubenswrapper[4899]: I0126 20:57:03.929970 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:03 crc kubenswrapper[4899]: E0126 20:57:03.930194 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:04 crc kubenswrapper[4899]: I0126 20:57:04.930469 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:04 crc kubenswrapper[4899]: I0126 20:57:04.930545 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:04 crc kubenswrapper[4899]: I0126 20:57:04.930774 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:04 crc kubenswrapper[4899]: E0126 20:57:04.931031 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:04 crc kubenswrapper[4899]: E0126 20:57:04.931352 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:04 crc kubenswrapper[4899]: E0126 20:57:04.931435 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.544300 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/1.log" Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.545104 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/0.log" Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.545179 4899 generic.go:334] "Generic (PLEG): container finished" podID="595ae596-1477-4438-94f7-69400dc1ba20" containerID="a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5" exitCode=1 Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.545224 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerDied","Data":"a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5"} Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.545268 4899 scope.go:117] "RemoveContainer" containerID="04ce7e6f7bffd979adf493aad4fd94d0858156a0eb2faf5d65658b6edf3422d5" Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.545890 4899 scope.go:117] "RemoveContainer" containerID="a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5" Jan 26 20:57:05 crc kubenswrapper[4899]: E0126 20:57:05.546182 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-24sf9_openshift-multus(595ae596-1477-4438-94f7-69400dc1ba20)\"" pod="openshift-multus/multus-24sf9" podUID="595ae596-1477-4438-94f7-69400dc1ba20" Jan 26 20:57:05 crc kubenswrapper[4899]: I0126 20:57:05.929661 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:05 crc kubenswrapper[4899]: E0126 20:57:05.929780 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:06 crc kubenswrapper[4899]: I0126 20:57:06.552266 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/1.log" Jan 26 20:57:06 crc kubenswrapper[4899]: I0126 20:57:06.930093 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:06 crc kubenswrapper[4899]: I0126 20:57:06.930226 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:06 crc kubenswrapper[4899]: E0126 20:57:06.930393 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:06 crc kubenswrapper[4899]: I0126 20:57:06.930446 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:06 crc kubenswrapper[4899]: E0126 20:57:06.930622 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:06 crc kubenswrapper[4899]: E0126 20:57:06.930769 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:07 crc kubenswrapper[4899]: I0126 20:57:07.930148 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:07 crc kubenswrapper[4899]: E0126 20:57:07.930346 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:08 crc kubenswrapper[4899]: I0126 20:57:08.930667 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:08 crc kubenswrapper[4899]: I0126 20:57:08.930697 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:08 crc kubenswrapper[4899]: I0126 20:57:08.930821 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:08 crc kubenswrapper[4899]: E0126 20:57:08.931037 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:08 crc kubenswrapper[4899]: E0126 20:57:08.931126 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:08 crc kubenswrapper[4899]: E0126 20:57:08.931757 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:09 crc kubenswrapper[4899]: I0126 20:57:09.929592 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:09 crc kubenswrapper[4899]: E0126 20:57:09.929734 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:10 crc kubenswrapper[4899]: E0126 20:57:10.885263 4899 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 20:57:10 crc kubenswrapper[4899]: I0126 20:57:10.930299 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:10 crc kubenswrapper[4899]: E0126 20:57:10.932228 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:10 crc kubenswrapper[4899]: I0126 20:57:10.932280 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:10 crc kubenswrapper[4899]: I0126 20:57:10.932256 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:10 crc kubenswrapper[4899]: E0126 20:57:10.932386 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:10 crc kubenswrapper[4899]: E0126 20:57:10.932440 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:11 crc kubenswrapper[4899]: E0126 20:57:11.082479 4899 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 20:57:11 crc kubenswrapper[4899]: I0126 20:57:11.930573 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:11 crc kubenswrapper[4899]: E0126 20:57:11.930702 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:12 crc kubenswrapper[4899]: I0126 20:57:12.930034 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:12 crc kubenswrapper[4899]: I0126 20:57:12.930167 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:12 crc kubenswrapper[4899]: I0126 20:57:12.930071 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:12 crc kubenswrapper[4899]: E0126 20:57:12.930217 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:12 crc kubenswrapper[4899]: E0126 20:57:12.930357 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:12 crc kubenswrapper[4899]: E0126 20:57:12.930443 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:13 crc kubenswrapper[4899]: I0126 20:57:13.929620 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:13 crc kubenswrapper[4899]: E0126 20:57:13.929801 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:13 crc kubenswrapper[4899]: I0126 20:57:13.938680 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:57:13 crc kubenswrapper[4899]: E0126 20:57:13.940791 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrvcx_openshift-ovn-kubernetes(30d7d720-d73a-488d-b6ec-755f5da1888c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" Jan 26 20:57:14 crc kubenswrapper[4899]: I0126 20:57:14.929877 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:14 crc kubenswrapper[4899]: E0126 20:57:14.930159 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:14 crc kubenswrapper[4899]: I0126 20:57:14.929877 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:14 crc kubenswrapper[4899]: I0126 20:57:14.929911 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:14 crc kubenswrapper[4899]: E0126 20:57:14.930749 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:14 crc kubenswrapper[4899]: E0126 20:57:14.930974 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:15 crc kubenswrapper[4899]: I0126 20:57:15.929698 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:15 crc kubenswrapper[4899]: E0126 20:57:15.929916 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:16 crc kubenswrapper[4899]: E0126 20:57:16.083777 4899 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 20:57:16 crc kubenswrapper[4899]: I0126 20:57:16.929755 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:16 crc kubenswrapper[4899]: I0126 20:57:16.929901 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:16 crc kubenswrapper[4899]: E0126 20:57:16.929913 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:16 crc kubenswrapper[4899]: I0126 20:57:16.929969 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:16 crc kubenswrapper[4899]: E0126 20:57:16.930291 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:16 crc kubenswrapper[4899]: E0126 20:57:16.930363 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:17 crc kubenswrapper[4899]: I0126 20:57:17.929664 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:17 crc kubenswrapper[4899]: E0126 20:57:17.929865 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:18 crc kubenswrapper[4899]: I0126 20:57:18.930478 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:18 crc kubenswrapper[4899]: I0126 20:57:18.930515 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:18 crc kubenswrapper[4899]: E0126 20:57:18.930680 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:18 crc kubenswrapper[4899]: I0126 20:57:18.930710 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:18 crc kubenswrapper[4899]: E0126 20:57:18.930853 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:18 crc kubenswrapper[4899]: E0126 20:57:18.931028 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:19 crc kubenswrapper[4899]: I0126 20:57:19.930762 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:19 crc kubenswrapper[4899]: E0126 20:57:19.931199 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:19 crc kubenswrapper[4899]: I0126 20:57:19.931496 4899 scope.go:117] "RemoveContainer" containerID="a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5" Jan 26 20:57:20 crc kubenswrapper[4899]: I0126 20:57:20.608985 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/1.log" Jan 26 20:57:20 crc kubenswrapper[4899]: I0126 20:57:20.609556 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerStarted","Data":"6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf"} Jan 26 20:57:20 crc kubenswrapper[4899]: I0126 20:57:20.930333 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:20 crc kubenswrapper[4899]: I0126 20:57:20.930405 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:20 crc kubenswrapper[4899]: I0126 20:57:20.930407 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:20 crc kubenswrapper[4899]: E0126 20:57:20.931382 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:20 crc kubenswrapper[4899]: E0126 20:57:20.931480 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:20 crc kubenswrapper[4899]: E0126 20:57:20.931595 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:21 crc kubenswrapper[4899]: E0126 20:57:21.087353 4899 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 20:57:21 crc kubenswrapper[4899]: I0126 20:57:21.930329 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:21 crc kubenswrapper[4899]: E0126 20:57:21.930528 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:22 crc kubenswrapper[4899]: I0126 20:57:22.930644 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:22 crc kubenswrapper[4899]: I0126 20:57:22.930755 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:22 crc kubenswrapper[4899]: E0126 20:57:22.930827 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:22 crc kubenswrapper[4899]: I0126 20:57:22.930910 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:22 crc kubenswrapper[4899]: E0126 20:57:22.931051 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:22 crc kubenswrapper[4899]: E0126 20:57:22.931158 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:23 crc kubenswrapper[4899]: I0126 20:57:23.929966 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:23 crc kubenswrapper[4899]: E0126 20:57:23.930172 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:24 crc kubenswrapper[4899]: I0126 20:57:24.930237 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:24 crc kubenswrapper[4899]: I0126 20:57:24.930343 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:24 crc kubenswrapper[4899]: E0126 20:57:24.930428 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:24 crc kubenswrapper[4899]: I0126 20:57:24.930451 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:24 crc kubenswrapper[4899]: E0126 20:57:24.930595 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:24 crc kubenswrapper[4899]: E0126 20:57:24.930726 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:25 crc kubenswrapper[4899]: I0126 20:57:25.930121 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:25 crc kubenswrapper[4899]: E0126 20:57:25.930330 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:26 crc kubenswrapper[4899]: E0126 20:57:26.088897 4899 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 20:57:26 crc kubenswrapper[4899]: I0126 20:57:26.930592 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:26 crc kubenswrapper[4899]: I0126 20:57:26.930634 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:26 crc kubenswrapper[4899]: I0126 20:57:26.930588 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:26 crc kubenswrapper[4899]: E0126 20:57:26.930726 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:26 crc kubenswrapper[4899]: E0126 20:57:26.930945 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:26 crc kubenswrapper[4899]: E0126 20:57:26.931340 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:26 crc kubenswrapper[4899]: I0126 20:57:26.931621 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.635310 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/3.log" Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.638984 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerStarted","Data":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.639412 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.673173 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podStartSLOduration=116.67314421 podStartE2EDuration="1m56.67314421s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:27.670151617 +0000 UTC 
m=+137.051739724" watchObservedRunningTime="2026-01-26 20:57:27.67314421 +0000 UTC m=+137.054732287" Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.924006 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5s8xd"] Jan 26 20:57:27 crc kubenswrapper[4899]: I0126 20:57:27.924142 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:27 crc kubenswrapper[4899]: E0126 20:57:27.924266 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:28 crc kubenswrapper[4899]: I0126 20:57:28.930104 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:28 crc kubenswrapper[4899]: E0126 20:57:28.930211 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:28 crc kubenswrapper[4899]: I0126 20:57:28.930385 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:28 crc kubenswrapper[4899]: E0126 20:57:28.930438 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:28 crc kubenswrapper[4899]: I0126 20:57:28.930519 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:28 crc kubenswrapper[4899]: E0126 20:57:28.930622 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:29 crc kubenswrapper[4899]: I0126 20:57:29.930437 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:29 crc kubenswrapper[4899]: E0126 20:57:29.930641 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5s8xd" podUID="88f49476-befa-4689-91cb-c0a8cc1def3d" Jan 26 20:57:30 crc kubenswrapper[4899]: I0126 20:57:30.930699 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:30 crc kubenswrapper[4899]: I0126 20:57:30.930761 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:30 crc kubenswrapper[4899]: I0126 20:57:30.930852 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:30 crc kubenswrapper[4899]: E0126 20:57:30.932521 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 20:57:30 crc kubenswrapper[4899]: E0126 20:57:30.932801 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 20:57:30 crc kubenswrapper[4899]: E0126 20:57:30.932913 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 20:57:31 crc kubenswrapper[4899]: I0126 20:57:31.930113 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:31 crc kubenswrapper[4899]: I0126 20:57:31.932458 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 20:57:31 crc kubenswrapper[4899]: I0126 20:57:31.935602 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.930483 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.930496 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.930595 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.933642 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.933705 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.933853 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 20:57:32 crc kubenswrapper[4899]: I0126 20:57:32.934376 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.446453 4899 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.486710 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jtwht"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.487305 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.490022 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.490537 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.491292 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.491978 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.491982 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.494575 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.492266 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.493235 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.495969 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509301 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509672 4899 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509738 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509777 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-encryption-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509797 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvxw\" (UniqueName: \"kubernetes.io/projected/e74d40cc-592a-4c5b-9f56-70f5657fc787-kube-api-access-gpvxw\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509810 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-serving-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509835 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509856 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-node-pullsecrets\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509871 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509885 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit-dir\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509909 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vdp\" (UniqueName: \"kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509943 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509962 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.509987 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510001 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-serving-cert\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510016 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-client\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 
crc kubenswrapper[4899]: I0126 20:57:36.510034 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-image-import-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510047 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510099 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510245 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510324 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510357 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510410 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510585 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.510694 4899 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.511460 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.520153 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.520504 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.522892 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.523385 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.523732 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.524088 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.524128 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.524637 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.525261 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jtwht"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.525291 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.525334 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.525391 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.525493 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.526038 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.526542 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.528378 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.537162 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.537508 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.537705 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.537777 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.537990 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538072 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538132 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538170 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538265 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 20:57:36 crc 
kubenswrapper[4899]: I0126 20:57:36.538295 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538439 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538573 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538708 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.538754 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.539024 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.539242 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.539370 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.541202 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jsrd8"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.542712 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.543175 4899 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.544882 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pv9sc"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.545258 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.545749 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.546625 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jdxz6"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.546942 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.547353 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.547648 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9tfhr"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.547722 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.547762 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.548111 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.548534 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.548559 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zt55n"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.548824 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8vzmg"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.549104 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.549118 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.549129 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.549288 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.560017 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.569118 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.570380 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.570653 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.583334 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.583560 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.583657 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.583734 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.584339 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.584673 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.585338 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.587077 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.587614 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588279 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588396 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588478 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588664 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588739 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588812 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.588917 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589119 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589230 4899 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589336 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589440 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589538 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589593 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589684 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589736 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.589769 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.590047 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.591898 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.592643 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.595465 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.595616 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.595865 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.597012 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.597796 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.598313 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.597811 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.598744 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.598780 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.604937 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.605121 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.605155 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.605305 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.605659 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.605878 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606062 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606189 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606214 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606287 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606390 4899 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606394 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606451 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606542 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606635 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606657 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606687 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606737 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606690 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606772 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606777 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606893 
4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606897 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.606997 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.608017 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.608413 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.608580 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610552 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610578 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610598 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-node-pullsecrets\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610613 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit-dir\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610630 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vdp\" (UniqueName: \"kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610653 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610669 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610684 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610705 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-serving-cert\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610718 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-client\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610753 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610767 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-image-import-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610794 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610815 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-encryption-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610831 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvxw\" (UniqueName: \"kubernetes.io/projected/e74d40cc-592a-4c5b-9f56-70f5657fc787-kube-api-access-gpvxw\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.610850 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-serving-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.611441 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-serving-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.612781 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.613132 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.613196 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.613652 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.614188 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.614235 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-node-pullsecrets\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.614259 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e74d40cc-592a-4c5b-9f56-70f5657fc787-audit-dir\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.614721 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.615214 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e74d40cc-592a-4c5b-9f56-70f5657fc787-image-import-ca\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.625756 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-serving-cert\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.625907 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.626278 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.627102 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.627403 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.627464 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.631981 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.632594 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.632833 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.632977 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.635298 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.635541 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.636971 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-encryption-config\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.636992 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e74d40cc-592a-4c5b-9f56-70f5657fc787-etcd-client\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.640840 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.648771 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.649602 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.651535 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.652186 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.652304 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.654321 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.656134 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.656223 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.656587 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.665328 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.666041 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.666222 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.666724 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.667015 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cqz7t"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.667288 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.667641 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.667793 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.667914 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.668180 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.669071 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.669155 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.669762 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.670541 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.670701 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.671693 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.672139 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.672543 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.673222 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.673293 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.673811 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl68z"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.674965 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.675315 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.678994 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lxbfv"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.679810 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.680010 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jsrd8"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.680944 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.683086 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jdxz6"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.683232 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.684351 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rjk22"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.684878 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rjk22"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.685555 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-spqzr"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.685915 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spqzr"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.686292 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8vzmg"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.687314 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.688308 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.688533 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.689294 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.690309 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.691279 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.692287 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.693244 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.694219 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pv9sc"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.695168 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.696151 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.697132 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.698131 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.699128 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zt55n"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.700044 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.701007 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.701953 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cqz7t"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.702903 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.704094 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.704891 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.705882 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.707009 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.707840 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl68z"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.708657 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.708863 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spqzr"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.710794 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnw9c"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712243 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712334 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvlpw\" (UniqueName: \"kubernetes.io/projected/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-kube-api-access-vvlpw\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712362 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbtt\" (UniqueName: \"kubernetes.io/projected/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-kube-api-access-5dbtt\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712379 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712397 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cfa0ed4a-5d9c-4b54-b733-9a133db47307-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712414 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-policies\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712509 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-stats-auth\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712558 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712580 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ll95\" (UniqueName: \"kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712608 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafb1f49-e8d2-42bb-b00c-c069f86db12c-metrics-tls\") pod \"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712631 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712646 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712717 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-dir\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712733 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712752 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqpf\" (UniqueName: \"kubernetes.io/projected/cafb1f49-e8d2-42bb-b00c-c069f86db12c-kube-api-access-knqpf\") pod \"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712768 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-metrics-certs\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712777 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712822 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lxbfv"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712783 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.712918 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.713022 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rkc\" (UniqueName: \"kubernetes.io/projected/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-kube-api-access-z5rkc\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.713763 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zhg\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-kube-api-access-l8zhg\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.713798 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.713846 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.714001 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/961c22be-2ec9-4136-a19a-191ca2eab35b-trusted-ca\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.714029 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997e7432-e74d-4f39-accd-a85b98f21978-service-ca-bundle\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715505 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"]
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715628 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx9c\" (UniqueName: \"kubernetes.io/projected/cfa0ed4a-5d9c-4b54-b733-9a133db47307-kube-api-access-9fx9c\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715667 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/961c22be-2ec9-4136-a19a-191ca2eab35b-metrics-tls\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715765 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ac40e6bb-0637-4926-af5e-3ab0f13e0449-machine-approver-tls\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715878 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715950 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.715980 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.716359 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-config\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717088 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-service-ca\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717171 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717213 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717253 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717285 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-auth-proxy-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717308 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8v9j\" (UniqueName: \"kubernetes.io/projected/c7e4ee5d-670e-40e6-9102-e9766a492381-kube-api-access-z8v9j\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717332 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-serving-cert\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717367 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-client\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717383 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717421 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717438 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-trusted-ca-bundle\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717456 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa0ed4a-5d9c-4b54-b733-9a133db47307-serving-cert\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717476 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717556 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7e4ee5d-670e-40e6-9102-e9766a492381-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717576 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717598 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5q4\" (UniqueName: \"kubernetes.io/projected/997e7432-e74d-4f39-accd-a85b98f21978-kube-api-access-hb5q4\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717620 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-oauth-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717641 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwx9\" (UniqueName: \"kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717737 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717759 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717799 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717821 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pq54\" (UniqueName: \"kubernetes.io/projected/ac40e6bb-0637-4926-af5e-3ab0f13e0449-kube-api-access-7pq54\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717838 4899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-encryption-config\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717860 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-default-certificate\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717880 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-serving-cert\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717901 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-oauth-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.717919 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-trusted-ca\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " 
pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.720425 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnw9c"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.722528 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.725724 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.728372 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.738777 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nhxcw"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.739545 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.743554 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nhxcw"] Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.748943 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.768994 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.789693 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.808790 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819045 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-service-ca\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819169 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819261 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819345 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819426 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-auth-proxy-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819563 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8v9j\" (UniqueName: \"kubernetes.io/projected/c7e4ee5d-670e-40e6-9102-e9766a492381-kube-api-access-z8v9j\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.819258 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 
20:57:36.820015 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-serving-cert\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820483 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-client\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820582 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820655 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820809 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7e4ee5d-670e-40e6-9102-e9766a492381-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820256 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-service-ca\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820332 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820201 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-auth-proxy-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.820890 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-trusted-ca-bundle\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821004 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa0ed4a-5d9c-4b54-b733-9a133db47307-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821029 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821050 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5q4\" (UniqueName: \"kubernetes.io/projected/997e7432-e74d-4f39-accd-a85b98f21978-kube-api-access-hb5q4\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821071 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821087 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821103 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821124 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-oauth-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821142 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkwx9\" (UniqueName: \"kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821162 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821181 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pq54\" (UniqueName: \"kubernetes.io/projected/ac40e6bb-0637-4926-af5e-3ab0f13e0449-kube-api-access-7pq54\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821201 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-encryption-config\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821224 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-default-certificate\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821241 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-serving-cert\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821271 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-oauth-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821290 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-trusted-ca\") pod \"console-operator-58897d9998-jdxz6\" (UID: 
\"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821302 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821310 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821396 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbtt\" (UniqueName: \"kubernetes.io/projected/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-kube-api-access-5dbtt\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821421 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821461 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvlpw\" (UniqueName: 
\"kubernetes.io/projected/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-kube-api-access-vvlpw\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821489 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821534 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cfa0ed4a-5d9c-4b54-b733-9a133db47307-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821556 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-policies\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821573 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-stats-auth\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.821721 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822106 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ll95\" (UniqueName: \"kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822137 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafb1f49-e8d2-42bb-b00c-c069f86db12c-metrics-tls\") pod \"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822158 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822187 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 
20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822218 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822237 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-dir\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822255 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822273 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqpf\" (UniqueName: \"kubernetes.io/projected/cafb1f49-e8d2-42bb-b00c-c069f86db12c-kube-api-access-knqpf\") pod \"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822289 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-metrics-certs\") pod \"router-default-5444994796-9tfhr\" (UID: 
\"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822304 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rkc\" (UniqueName: \"kubernetes.io/projected/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-kube-api-access-z5rkc\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822322 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8zhg\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-kube-api-access-l8zhg\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822337 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822360 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822374 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/961c22be-2ec9-4136-a19a-191ca2eab35b-trusted-ca\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822388 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997e7432-e74d-4f39-accd-a85b98f21978-service-ca-bundle\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822407 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fx9c\" (UniqueName: \"kubernetes.io/projected/cfa0ed4a-5d9c-4b54-b733-9a133db47307-kube-api-access-9fx9c\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822422 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/961c22be-2ec9-4136-a19a-191ca2eab35b-metrics-tls\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822437 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ac40e6bb-0637-4926-af5e-3ab0f13e0449-machine-approver-tls\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822479 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822518 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-config\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822532 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.822574 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.823575 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-oauth-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " 
pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.823656 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.823913 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.824268 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-serving-cert\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.824440 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7e4ee5d-670e-40e6-9102-e9766a492381-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.825344 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa0ed4a-5d9c-4b54-b733-9a133db47307-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.825708 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-dir\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.825780 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-etcd-client\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.825841 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-serving-cert\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.825946 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cfa0ed4a-5d9c-4b54-b733-9a133db47307-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.826122 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.826156 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-audit-policies\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.826249 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.826264 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-console-oauth-config\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.827022 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997e7432-e74d-4f39-accd-a85b98f21978-service-ca-bundle\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.827050 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828448 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-serving-cert\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828569 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.827549 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-trusted-ca\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.827717 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-config\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828337 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828348 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac40e6bb-0637-4926-af5e-3ab0f13e0449-config\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828445 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.827534 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/961c22be-2ec9-4136-a19a-191ca2eab35b-trusted-ca\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.828267 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-encryption-config\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.829435 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.829668 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.829722 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ac40e6bb-0637-4926-af5e-3ab0f13e0449-machine-approver-tls\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.830297 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.830614 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc 
kubenswrapper[4899]: I0126 20:57:36.830878 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-stats-auth\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.830884 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.831081 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.831342 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-trusted-ca-bundle\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.831388 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/961c22be-2ec9-4136-a19a-191ca2eab35b-metrics-tls\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.832043 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-metrics-certs\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.833240 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.834464 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.841810 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/997e7432-e74d-4f39-accd-a85b98f21978-default-certificate\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.842176 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafb1f49-e8d2-42bb-b00c-c069f86db12c-metrics-tls\") pod 
\"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.849317 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.870251 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.889674 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.908957 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.935943 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.949914 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.969007 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 20:57:36 crc kubenswrapper[4899]: I0126 20:57:36.989529 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.008873 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.028861 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.049419 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.069206 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.090130 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.109581 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.130102 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.150007 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.169538 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.190200 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.209901 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.229573 4899 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.249747 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.270019 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.289325 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.309090 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.358967 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vdp\" (UniqueName: \"kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp\") pod \"controller-manager-879f6c89f-dq8kh\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.388526 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvxw\" (UniqueName: \"kubernetes.io/projected/e74d40cc-592a-4c5b-9f56-70f5657fc787-kube-api-access-gpvxw\") pod \"apiserver-76f77b778f-jtwht\" (UID: \"e74d40cc-592a-4c5b-9f56-70f5657fc787\") " pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.409842 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.429903 
4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.444127 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.450098 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.470287 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.493376 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.495344 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.510520 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.529995 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.550143 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.569638 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.591326 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.611265 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.629889 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.651201 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.667974 4899 request.go:700] Waited for 1.000056003s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 
26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.669323 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.689777 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.709244 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.729562 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.750074 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.768845 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.789282 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.808897 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.828830 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.849751 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" 
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.869346 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.889944 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.908879 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.920241 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"]
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.924772 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jtwht"]
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.929732 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.950032 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.971661 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 20:57:37 crc kubenswrapper[4899]: I0126 20:57:37.989721 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.009208 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.028793 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.049471 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.069341 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.089486 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.109233 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.128836 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.149295 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.178270 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.188824 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.209119 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.229353 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.249407 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.268161 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.291273 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.310359 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.328720 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.349947 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.369081 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.390158 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.409005 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.429254 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.449919 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.469176 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.490118 4899 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.510115 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.531592 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.549528 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.568763 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.603236 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8v9j\" (UniqueName: \"kubernetes.io/projected/c7e4ee5d-670e-40e6-9102-e9766a492381-kube-api-access-z8v9j\") pod \"cluster-samples-operator-665b6dd947-j7skd\" (UID: \"c7e4ee5d-670e-40e6-9102-e9766a492381\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.622818 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5q4\" (UniqueName: \"kubernetes.io/projected/997e7432-e74d-4f39-accd-a85b98f21978-kube-api-access-hb5q4\") pod \"router-default-5444994796-9tfhr\" (UID: \"997e7432-e74d-4f39-accd-a85b98f21978\") " pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.653519 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pq54\" (UniqueName: \"kubernetes.io/projected/ac40e6bb-0637-4926-af5e-3ab0f13e0449-kube-api-access-7pq54\") pod \"machine-approver-56656f9798-9dm9b\" (UID: \"ac40e6bb-0637-4926-af5e-3ab0f13e0449\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.661965 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvlpw\" (UniqueName: \"kubernetes.io/projected/d605cb6d-937c-45d9-bc80-3a7b7ac58ca8-kube-api-access-vvlpw\") pod \"apiserver-7bbb656c7d-gvbz6\" (UID: \"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.681354 4899 generic.go:334] "Generic (PLEG): container finished" podID="e74d40cc-592a-4c5b-9f56-70f5657fc787" containerID="6f3251de7ab0327c58e1345267ddd0968e0443bce76b02c4ede42a5c88476f1f" exitCode=0
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.681394 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" event={"ID":"e74d40cc-592a-4c5b-9f56-70f5657fc787","Type":"ContainerDied","Data":"6f3251de7ab0327c58e1345267ddd0968e0443bce76b02c4ede42a5c88476f1f"}
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.681424 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" event={"ID":"e74d40cc-592a-4c5b-9f56-70f5657fc787","Type":"ContainerStarted","Data":"1ffbc7f14ece6e2493f8713d279e6354229ac998fd3eac9ffa7b923b5ce1552a"}
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.682687 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" event={"ID":"676ef23d-20dd-4ccb-b846-b83c71305d24","Type":"ContainerStarted","Data":"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4"}
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.682709 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" event={"ID":"676ef23d-20dd-4ccb-b846-b83c71305d24","Type":"ContainerStarted","Data":"0cf350b47bfbeb438572584b967e194d5b2569dee03f5ec43693e2d65d992af7"}
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.682888 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.687822 4899 request.go:700] Waited for 1.865485821s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.695735 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbtt\" (UniqueName: \"kubernetes.io/projected/d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc-kube-api-access-5dbtt\") pod \"console-f9d7485db-jsrd8\" (UID: \"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc\") " pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.696679 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.718312 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkwx9\" (UniqueName: \"kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9\") pod \"route-controller-manager-6576b87f9c-tr24d\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.723660 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rkc\" (UniqueName: \"kubernetes.io/projected/0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48-kube-api-access-z5rkc\") pod \"console-operator-58897d9998-jdxz6\" (UID: \"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48\") " pod="openshift-console-operator/console-operator-58897d9998-jdxz6"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.749360 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fx9c\" (UniqueName: \"kubernetes.io/projected/cfa0ed4a-5d9c-4b54-b733-9a133db47307-kube-api-access-9fx9c\") pod \"openshift-config-operator-7777fb866f-r8lh9\" (UID: \"cfa0ed4a-5d9c-4b54-b733-9a133db47307\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.760715 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8zhg\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-kube-api-access-l8zhg\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.770368 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.770856 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9tfhr"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.788648 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ll95\" (UniqueName: \"kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95\") pod \"oauth-openshift-558db77b4-rq8lx\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.799149 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.804456 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqpf\" (UniqueName: \"kubernetes.io/projected/cafb1f49-e8d2-42bb-b00c-c069f86db12c-kube-api-access-knqpf\") pod \"dns-operator-744455d44c-pv9sc\" (UID: \"cafb1f49-e8d2-42bb-b00c-c069f86db12c\") " pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.807589 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.824176 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.825136 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/961c22be-2ec9-4136-a19a-191ca2eab35b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96lmw\" (UID: \"961c22be-2ec9-4136-a19a-191ca2eab35b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.835366 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.841904 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.848790 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 20:57:38 crc kubenswrapper[4899]: E0126 20:57:38.849054 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:59:40.849023919 +0000 UTC m=+270.230611956 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.854892 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jdxz6"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.860489 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.875690 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jsrd8"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950055 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950085 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8086b1ce-02cb-465d-9191-ce5af96d2f7a-serving-cert\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950100 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950118 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27v6\" (UniqueName: \"kubernetes.io/projected/bd047a32-c6c9-4376-a82b-514d2bfede44-kube-api-access-d27v6\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950141 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950156 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50c155-e6f3-437e-bd5a-672325cf782c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950173 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71935a90-1ee3-448e-a8f6-7a370ef7062c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950191 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sjt\" (UniqueName: \"kubernetes.io/projected/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-kube-api-access-95sjt\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950209 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-config\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950224 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c60bec7a-6571-4594-a05f-4603f5959477-proxy-tls\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950239 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950255 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz562\" (UniqueName: \"kubernetes.io/projected/c60bec7a-6571-4594-a05f-4603f5959477-kube-api-access-rz562\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950437 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-images\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950459 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8acf30a-687a-409e-a4ee-57d340449932-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950478 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgm77\" (UniqueName: \"kubernetes.io/projected/7390da62-a9fa-495a-8d5a-ed2c660337cf-kube-api-access-vgm77\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950493 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950508 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71935a90-1ee3-448e-a8f6-7a370ef7062c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950525 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-client\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950546 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950576 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd047a32-c6c9-4376-a82b-514d2bfede44-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950591 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950606 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-service-ca-bundle\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950622 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950638 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmsh\" (UniqueName: \"kubernetes.io/projected/8bec087d-1164-43d3-b119-58a88e199403-kube-api-access-mdmsh\") pod \"downloads-7954f5f757-8vzmg\" (UID: \"8bec087d-1164-43d3-b119-58a88e199403\") " pod="openshift-console/downloads-7954f5f757-8vzmg"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950652 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-serving-cert\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950670 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950688 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljn2\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950704 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52pl\" (UniqueName: \"kubernetes.io/projected/fd50c155-e6f3-437e-bd5a-672325cf782c-kube-api-access-h52pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950720 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950736 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950753 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71935a90-1ee3-448e-a8f6-7a370ef7062c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950772 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acf30a-687a-409e-a4ee-57d340449932-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950790 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390da62-a9fa-495a-8d5a-ed2c660337cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950809 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950823 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e09a141-4aca-4102-8161-849997100ca4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950841 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgc6\" (UniqueName: \"kubernetes.io/projected/8086b1ce-02cb-465d-9191-ce5af96d2f7a-kube-api-access-7fgc6\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950856 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-service-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950872 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950887 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950902 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-config\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950917 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkp48\" (UniqueName: \"kubernetes.io/projected/a8acf30a-687a-409e-a4ee-57d340449932-kube-api-access-qkp48\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950953 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50c155-e6f3-437e-bd5a-672325cf782c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950970 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.950987 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd047a32-c6c9-4376-a82b-514d2bfede44-proxy-tls\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951000 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951014 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e09a141-4aca-4102-8161-849997100ca4-config\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951029 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b557j\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-kube-api-access-b557j\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951048 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4k9t\" (UniqueName: \"kubernetes.io/projected/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-kube-api-access-l4k9t\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951063 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7390da62-a9fa-495a-8d5a-ed2c660337cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951078 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e09a141-4aca-4102-8161-849997100ca4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"
Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951094 4899
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5800f5da-f007-4a93-ab2b-97912d369526-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.951109 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5800f5da-f007-4a93-ab2b-97912d369526-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:38 crc kubenswrapper[4899]: E0126 20:57:38.951657 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.451639355 +0000 UTC m=+148.833227462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.958852 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.959408 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.961277 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.976420 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:38 crc kubenswrapper[4899]: I0126 20:57:38.988306 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.054848 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.055066 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.555029166 +0000 UTC m=+148.936617213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055124 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmjr\" (UniqueName: \"kubernetes.io/projected/11d88052-a254-4fc9-ab57-54bee461f27e-kube-api-access-ksmjr\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055167 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-metrics-tls\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055192 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhx5\" (UniqueName: \"kubernetes.io/projected/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-kube-api-access-bmhx5\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055213 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50718f10-624f-4611-a5ac-d19a63806946-tmpfs\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: 
\"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055257 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-key\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055311 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd047a32-c6c9-4376-a82b-514d2bfede44-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055351 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055377 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hwv\" (UniqueName: \"kubernetes.io/projected/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-kube-api-access-k9hwv\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055415 4899 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055440 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmsh\" (UniqueName: \"kubernetes.io/projected/8bec087d-1164-43d3-b119-58a88e199403-kube-api-access-mdmsh\") pod \"downloads-7954f5f757-8vzmg\" (UID: \"8bec087d-1164-43d3-b119-58a88e199403\") " pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055464 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-service-ca-bundle\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055486 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-certs\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055533 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-csi-data-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 
20:57:39.055567 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-node-bootstrap-token\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055587 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4zd\" (UniqueName: \"kubernetes.io/projected/b1684954-64a8-4748-89f9-31c6386cd712-kube-api-access-dj4zd\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055609 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-serving-cert\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055635 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-images\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055657 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: 
\"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055726 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f1cb30-6429-4ebc-8301-5f1de3e70611-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055780 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-webhook-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055808 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljn2\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055832 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvxf\" (UniqueName: \"kubernetes.io/projected/86c5e568-89ec-459d-bec4-8b2c0f075531-kube-api-access-msvxf\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055866 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-config-volume\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055960 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9803ca87-d488-4845-b59d-f928fa6e45f6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.055987 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-plugins-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056013 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52pl\" (UniqueName: \"kubernetes.io/projected/fd50c155-e6f3-437e-bd5a-672325cf782c-kube-api-access-h52pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056037 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c5e568-89ec-459d-bec4-8b2c0f075531-config\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: 
\"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056075 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056132 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71935a90-1ee3-448e-a8f6-7a370ef7062c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056149 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd047a32-c6c9-4376-a82b-514d2bfede44-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056172 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acf30a-687a-409e-a4ee-57d340449932-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056269 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390da62-a9fa-495a-8d5a-ed2c660337cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056289 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056315 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e09a141-4aca-4102-8161-849997100ca4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056344 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fgc6\" (UniqueName: \"kubernetes.io/projected/8086b1ce-02cb-465d-9191-ce5af96d2f7a-kube-api-access-7fgc6\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056363 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9803ca87-d488-4845-b59d-f928fa6e45f6-config\") pod 
\"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056401 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-config\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056416 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-srv-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056433 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-apiservice-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056449 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056464 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-service-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056480 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50c155-e6f3-437e-bd5a-672325cf782c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056496 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056511 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-config\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056527 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkp48\" (UniqueName: \"kubernetes.io/projected/a8acf30a-687a-409e-a4ee-57d340449932-kube-api-access-qkp48\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: 
\"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056544 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9803ca87-d488-4845-b59d-f928fa6e45f6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056585 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd047a32-c6c9-4376-a82b-514d2bfede44-proxy-tls\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056611 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056660 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e09a141-4aca-4102-8161-849997100ca4-config\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056688 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ngd\" (UniqueName: \"kubernetes.io/projected/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-kube-api-access-d5ngd\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056703 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsnw\" (UniqueName: \"kubernetes.io/projected/50718f10-624f-4611-a5ac-d19a63806946-kube-api-access-fhsnw\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056733 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4k9t\" (UniqueName: \"kubernetes.io/projected/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-kube-api-access-l4k9t\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056748 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b557j\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-kube-api-access-b557j\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056764 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd7lh\" (UniqueName: \"kubernetes.io/projected/5f0fd389-9264-494e-a44d-7290896b12b4-kube-api-access-cd7lh\") pod 
\"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056799 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-srv-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056815 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-cert\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056831 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7390da62-a9fa-495a-8d5a-ed2c660337cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056847 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e09a141-4aca-4102-8161-849997100ca4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056863 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056880 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5800f5da-f007-4a93-ab2b-97912d369526-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056906 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5800f5da-f007-4a93-ab2b-97912d369526-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056961 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btkm\" (UniqueName: \"kubernetes.io/projected/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-kube-api-access-7btkm\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.056988 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: 
\"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057024 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzm7\" (UniqueName: \"kubernetes.io/projected/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-kube-api-access-vzzm7\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057040 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-mountpoint-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057057 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057084 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8086b1ce-02cb-465d-9191-ce5af96d2f7a-serving-cert\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057100 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9pd\" (UniqueName: \"kubernetes.io/projected/6ba045ff-8d96-4b64-819d-9de471453463-kube-api-access-ll9pd\") pod \"migrator-59844c95c7-plssq\" (UID: \"6ba045ff-8d96-4b64-819d-9de471453463\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057121 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057138 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27v6\" (UniqueName: \"kubernetes.io/projected/bd047a32-c6c9-4376-a82b-514d2bfede44-kube-api-access-d27v6\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057164 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f46s\" (UniqueName: \"kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057179 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-socket-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: 
\"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057212 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057231 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50c155-e6f3-437e-bd5a-672325cf782c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057268 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sjt\" (UniqueName: \"kubernetes.io/projected/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-kube-api-access-95sjt\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057284 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-config\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057302 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c60bec7a-6571-4594-a05f-4603f5959477-proxy-tls\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057318 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfgc\" (UniqueName: \"kubernetes.io/projected/53f1cb30-6429-4ebc-8301-5f1de3e70611-kube-api-access-dpfgc\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057335 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71935a90-1ee3-448e-a8f6-7a370ef7062c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057365 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwkj\" (UniqueName: \"kubernetes.io/projected/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-kube-api-access-nzwkj\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057411 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-profile-collector-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057438 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057455 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz562\" (UniqueName: \"kubernetes.io/projected/c60bec7a-6571-4594-a05f-4603f5959477-kube-api-access-rz562\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057509 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ltv\" (UniqueName: \"kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057548 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057573 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-registration-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057591 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-images\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.058180 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.058828 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acf30a-687a-409e-a4ee-57d340449932-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.059000 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.061297 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7390da62-a9fa-495a-8d5a-ed2c660337cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.062378 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5800f5da-f007-4a93-ab2b-97912d369526-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.057317 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.063605 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71935a90-1ee3-448e-a8f6-7a370ef7062c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.063722 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-service-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.067768 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-config\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.069210 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e09a141-4aca-4102-8161-849997100ca4-config\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.069467 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-config\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.069492 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-serving-cert\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.070064 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.071235 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.071611 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.571595792 +0000 UTC m=+148.953183869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.072478 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c60bec7a-6571-4594-a05f-4603f5959477-images\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073135 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50c155-e6f3-437e-bd5a-672325cf782c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073230 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8acf30a-687a-409e-a4ee-57d340449932-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073329 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgm77\" (UniqueName: 
\"kubernetes.io/projected/7390da62-a9fa-495a-8d5a-ed2c660337cf-kube-api-access-vgm77\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073356 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073383 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073483 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-client\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073509 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71935a90-1ee3-448e-a8f6-7a370ef7062c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073655 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073682 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86c5e568-89ec-459d-bec4-8b2c0f075531-serving-cert\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073708 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.073746 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-cabundle\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.074623 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/8086b1ce-02cb-465d-9191-ce5af96d2f7a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.079439 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-client\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.080209 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.081131 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71935a90-1ee3-448e-a8f6-7a370ef7062c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.081586 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5800f5da-f007-4a93-ab2b-97912d369526-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.082364 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.083316 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c60bec7a-6571-4594-a05f-4603f5959477-proxy-tls\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.083577 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-etcd-ca\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.086158 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50c155-e6f3-437e-bd5a-672325cf782c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.086589 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390da62-a9fa-495a-8d5a-ed2c660337cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 
20:57:39.086647 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8acf30a-687a-409e-a4ee-57d340449932-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.088408 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.091088 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.091384 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e09a141-4aca-4102-8161-849997100ca4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.114465 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd047a32-c6c9-4376-a82b-514d2bfede44-proxy-tls\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.117269 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8086b1ce-02cb-465d-9191-ce5af96d2f7a-serving-cert\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.138039 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdmsh\" (UniqueName: \"kubernetes.io/projected/8bec087d-1164-43d3-b119-58a88e199403-kube-api-access-mdmsh\") pod \"downloads-7954f5f757-8vzmg\" (UID: \"8bec087d-1164-43d3-b119-58a88e199403\") " pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.149151 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4k9t\" (UniqueName: \"kubernetes.io/projected/edfc8cc3-a964-4fa5-9ddf-fb15d33a236b-kube-api-access-l4k9t\") pod \"etcd-operator-b45778765-zt55n\" (UID: \"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.153902 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b557j\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-kube-api-access-b557j\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.168271 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181611 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181737 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzm7\" (UniqueName: \"kubernetes.io/projected/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-kube-api-access-vzzm7\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181756 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-mountpoint-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181772 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181791 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9pd\" (UniqueName: 
\"kubernetes.io/projected/6ba045ff-8d96-4b64-819d-9de471453463-kube-api-access-ll9pd\") pod \"migrator-59844c95c7-plssq\" (UID: \"6ba045ff-8d96-4b64-819d-9de471453463\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181823 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f46s\" (UniqueName: \"kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181839 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-socket-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181869 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfgc\" (UniqueName: \"kubernetes.io/projected/53f1cb30-6429-4ebc-8301-5f1de3e70611-kube-api-access-dpfgc\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181892 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwkj\" (UniqueName: \"kubernetes.io/projected/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-kube-api-access-nzwkj\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 
20:57:39.181908 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-profile-collector-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.181948 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2ltv\" (UniqueName: \"kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.181990 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.68197156 +0000 UTC m=+149.063559597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182037 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182065 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-registration-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182091 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-mountpoint-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182098 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182134 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86c5e568-89ec-459d-bec4-8b2c0f075531-serving-cert\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182151 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182175 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-cabundle\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182193 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmjr\" (UniqueName: \"kubernetes.io/projected/11d88052-a254-4fc9-ab57-54bee461f27e-kube-api-access-ksmjr\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182207 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-metrics-tls\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182224 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhx5\" (UniqueName: \"kubernetes.io/projected/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-kube-api-access-bmhx5\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182240 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50718f10-624f-4611-a5ac-d19a63806946-tmpfs\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182255 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-key\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182273 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hwv\" (UniqueName: \"kubernetes.io/projected/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-kube-api-access-k9hwv\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182291 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-certs\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182306 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-csi-data-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182320 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-node-bootstrap-token\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182335 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4zd\" (UniqueName: \"kubernetes.io/projected/b1684954-64a8-4748-89f9-31c6386cd712-kube-api-access-dj4zd\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182353 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-images\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182369 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182387 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f1cb30-6429-4ebc-8301-5f1de3e70611-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182411 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-webhook-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182434 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvxf\" (UniqueName: \"kubernetes.io/projected/86c5e568-89ec-459d-bec4-8b2c0f075531-kube-api-access-msvxf\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182449 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-config-volume\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 
20:57:39.182467 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9803ca87-d488-4845-b59d-f928fa6e45f6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182483 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-plugins-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182504 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c5e568-89ec-459d-bec4-8b2c0f075531-config\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182534 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182562 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9803ca87-d488-4845-b59d-f928fa6e45f6-config\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182583 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-config\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182596 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-srv-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182610 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-apiservice-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182638 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9803ca87-d488-4845-b59d-f928fa6e45f6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182662 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ngd\" (UniqueName: 
\"kubernetes.io/projected/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-kube-api-access-d5ngd\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182678 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsnw\" (UniqueName: \"kubernetes.io/projected/50718f10-624f-4611-a5ac-d19a63806946-kube-api-access-fhsnw\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182697 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd7lh\" (UniqueName: \"kubernetes.io/projected/5f0fd389-9264-494e-a44d-7290896b12b4-kube-api-access-cd7lh\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182717 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-srv-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182734 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-cert\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182752 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182769 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btkm\" (UniqueName: \"kubernetes.io/projected/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-kube-api-access-7btkm\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.182918 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-csi-data-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.184340 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e09a141-4aca-4102-8161-849997100ca4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-flwsz\" (UID: \"2e09a141-4aca-4102-8161-849997100ca4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.184602 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.185549 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-cabundle\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.185598 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-images\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.185831 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.186512 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9803ca87-d488-4845-b59d-f928fa6e45f6-config\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.187044 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f1cb30-6429-4ebc-8301-5f1de3e70611-config\") pod \"machine-api-operator-5694c8668f-lxbfv\" 
(UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.187947 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-socket-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.188886 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.191777 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-metrics-tls\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.192984 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.193135 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-plugins-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.193595 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/86c5e568-89ec-459d-bec4-8b2c0f075531-serving-cert\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.193641 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-profile-collector-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.194801 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c5e568-89ec-459d-bec4-8b2c0f075531-config\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.196525 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11d88052-a254-4fc9-ab57-54bee461f27e-registration-dir\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.197902 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-webhook-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.198467 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-config-volume\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.201016 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.208262 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-certs\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.209517 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.212688 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50718f10-624f-4611-a5ac-d19a63806946-apiservice-cert\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.213241 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9803ca87-d488-4845-b59d-f928fa6e45f6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.213262 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-srv-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.213451 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50718f10-624f-4611-a5ac-d19a63806946-tmpfs\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.213516 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkp48\" (UniqueName: 
\"kubernetes.io/projected/a8acf30a-687a-409e-a4ee-57d340449932-kube-api-access-qkp48\") pod \"openshift-apiserver-operator-796bbdcf4f-h9sjz\" (UID: \"a8acf30a-687a-409e-a4ee-57d340449932\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.214293 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f0fd389-9264-494e-a44d-7290896b12b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.214680 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f1cb30-6429-4ebc-8301-5f1de3e70611-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.216140 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.218897 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-signing-key\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc 
kubenswrapper[4899]: I0126 20:57:39.220014 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-cert\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.221075 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fgc6\" (UniqueName: \"kubernetes.io/projected/8086b1ce-02cb-465d-9191-ce5af96d2f7a-kube-api-access-7fgc6\") pod \"authentication-operator-69f744f599-4v7dp\" (UID: \"8086b1ce-02cb-465d-9191-ce5af96d2f7a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.221220 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.225055 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1684954-64a8-4748-89f9-31c6386cd712-node-bootstrap-token\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.229088 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-srv-cert\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.231567 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52pl\" (UniqueName: \"kubernetes.io/projected/fd50c155-e6f3-437e-bd5a-672325cf782c-kube-api-access-h52pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-cctdv\" (UID: \"fd50c155-e6f3-437e-bd5a-672325cf782c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.258982 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.270266 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.284590 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.285133 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.785120263 +0000 UTC m=+149.166708300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.286007 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.293690 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6"] Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.294133 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sjt\" (UniqueName: \"kubernetes.io/projected/79214ec9-11ec-4a5c-bfee-59ebe2caeeea-kube-api-access-95sjt\") pod \"multus-admission-controller-857f4d67dd-7jwnb\" (UID: \"79214ec9-11ec-4a5c-bfee-59ebe2caeeea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.301409 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.308381 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljn2\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.321205 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz562\" (UniqueName: \"kubernetes.io/projected/c60bec7a-6571-4594-a05f-4603f5959477-kube-api-access-rz562\") pod \"machine-config-operator-74547568cd-t2bcw\" (UID: \"c60bec7a-6571-4594-a05f-4603f5959477\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.326213 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd"] Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.342290 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5800f5da-f007-4a93-ab2b-97912d369526-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cjdkv\" (UID: \"5800f5da-f007-4a93-ab2b-97912d369526\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.347312 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71935a90-1ee3-448e-a8f6-7a370ef7062c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmtk7\" (UID: \"71935a90-1ee3-448e-a8f6-7a370ef7062c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.366818 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27v6\" (UniqueName: \"kubernetes.io/projected/bd047a32-c6c9-4376-a82b-514d2bfede44-kube-api-access-d27v6\") pod \"machine-config-controller-84d6567774-68k5w\" (UID: \"bd047a32-c6c9-4376-a82b-514d2bfede44\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.385328 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.385808 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.885780088 +0000 UTC m=+149.267368125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.387517 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgm77\" (UniqueName: \"kubernetes.io/projected/7390da62-a9fa-495a-8d5a-ed2c660337cf-kube-api-access-vgm77\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqmc2\" (UID: \"7390da62-a9fa-495a-8d5a-ed2c660337cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.391535 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.435055 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.464634 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzm7\" (UniqueName: \"kubernetes.io/projected/27e7ffa3-46b0-4531-8bc9-45a93d9efafd-kube-api-access-vzzm7\") pod \"service-ca-9c57cc56f-cqz7t\" (UID: \"27e7ffa3-46b0-4531-8bc9-45a93d9efafd\") " pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.464733 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2ltv\" (UniqueName: \"kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv\") pod \"collect-profiles-29491005-n4n9h\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.479213 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.486728 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btkm\" (UniqueName: \"kubernetes.io/projected/f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6-kube-api-access-7btkm\") pod \"ingress-canary-spqzr\" (UID: \"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6\") " pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.487572 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.487980 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:39.987964762 +0000 UTC m=+149.369552799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.505420 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.517441 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.524740 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.532654 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ngd\" (UniqueName: \"kubernetes.io/projected/9aed051f-c6e6-4694-8a2a-065e5dd6efa4-kube-api-access-d5ngd\") pod \"catalog-operator-68c6474976-hwlnd\" (UID: \"9aed051f-c6e6-4694-8a2a-065e5dd6efa4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.537462 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4zd\" (UniqueName: \"kubernetes.io/projected/b1684954-64a8-4748-89f9-31c6386cd712-kube-api-access-dj4zd\") pod \"machine-config-server-rjk22\" (UID: \"b1684954-64a8-4748-89f9-31c6386cd712\") " pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.538185 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhx5\" (UniqueName: \"kubernetes.io/projected/d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5-kube-api-access-bmhx5\") pod \"dns-default-nhxcw\" (UID: \"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5\") " pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.561882 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.566639 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.567336 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmjr\" (UniqueName: \"kubernetes.io/projected/11d88052-a254-4fc9-ab57-54bee461f27e-kube-api-access-ksmjr\") pod \"csi-hostpathplugin-bnw9c\" (UID: \"11d88052-a254-4fc9-ab57-54bee461f27e\") " pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.567854 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9pd\" (UniqueName: \"kubernetes.io/projected/6ba045ff-8d96-4b64-819d-9de471453463-kube-api-access-ll9pd\") pod \"migrator-59844c95c7-plssq\" (UID: \"6ba045ff-8d96-4b64-819d-9de471453463\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.579309 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.586292 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f46s\" (UniqueName: \"kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s\") pod \"marketplace-operator-79b997595-xl68z\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.588304 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.588651 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.088634827 +0000 UTC m=+149.470222854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.593991 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.607261 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.614614 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.645647 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfgc\" (UniqueName: \"kubernetes.io/projected/53f1cb30-6429-4ebc-8301-5f1de3e70611-kube-api-access-dpfgc\") pod \"machine-api-operator-5694c8668f-lxbfv\" (UID: \"53f1cb30-6429-4ebc-8301-5f1de3e70611\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.646688 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwkj\" (UniqueName: \"kubernetes.io/projected/7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0-kube-api-access-nzwkj\") pod \"control-plane-machine-set-operator-78cbb6b69f-5kjdm\" (UID: \"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.649967 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.652240 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hwv\" (UniqueName: \"kubernetes.io/projected/fcf5a119-adec-45a7-bd1d-758f6c1d62ac-kube-api-access-k9hwv\") pod \"package-server-manager-789f6589d5-h5tpt\" (UID: \"fcf5a119-adec-45a7-bd1d-758f6c1d62ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.654254 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.661238 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rjk22" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.668122 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvxf\" (UniqueName: \"kubernetes.io/projected/86c5e568-89ec-459d-bec4-8b2c0f075531-kube-api-access-msvxf\") pod \"service-ca-operator-777779d784-hvmc5\" (UID: \"86c5e568-89ec-459d-bec4-8b2c0f075531\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.668335 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spqzr" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.690148 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.690506 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.190494431 +0000 UTC m=+149.572082458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.696503 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsnw\" (UniqueName: \"kubernetes.io/projected/50718f10-624f-4611-a5ac-d19a63806946-kube-api-access-fhsnw\") pod \"packageserver-d55dfcdfc-hccs4\" (UID: \"50718f10-624f-4611-a5ac-d19a63806946\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.701848 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.706477 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.718629 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd7lh\" (UniqueName: \"kubernetes.io/projected/5f0fd389-9264-494e-a44d-7290896b12b4-kube-api-access-cd7lh\") pod \"olm-operator-6b444d44fb-v6fl4\" (UID: \"5f0fd389-9264-494e-a44d-7290896b12b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.719485 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" event={"ID":"e74d40cc-592a-4c5b-9f56-70f5657fc787","Type":"ContainerStarted","Data":"c7fcadb30cb4a3a30a9fdcb23dc29f56c37be83b8e1b345ae87bf8b321ea35ae"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.719527 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" event={"ID":"e74d40cc-592a-4c5b-9f56-70f5657fc787","Type":"ContainerStarted","Data":"fa02cb1a002d8034e61392748cfb14b937eb55f4d4e9620d0af7f39a6a0c39a3"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.760759 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9803ca87-d488-4845-b59d-f928fa6e45f6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n7jfd\" (UID: \"9803ca87-d488-4845-b59d-f928fa6e45f6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.784094 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9tfhr" 
event={"ID":"997e7432-e74d-4f39-accd-a85b98f21978","Type":"ContainerStarted","Data":"7b7e99bf2c8148fd6318ae98c2504ec7ff22af7118f097d5bb0fa73914977838"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.784132 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9tfhr" event={"ID":"997e7432-e74d-4f39-accd-a85b98f21978","Type":"ContainerStarted","Data":"c7318f8f45ab31d9b085f38602e7b4c6cb94906e4a531a32e2244fa31f847ea0"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.806663 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.818061 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.318033943 +0000 UTC m=+149.699621980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.838794 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" event={"ID":"ac40e6bb-0637-4926-af5e-3ab0f13e0449","Type":"ContainerStarted","Data":"f6035d66a71fc892fe25030a79ab1fb7bd9569fae3e750058365a74583a408db"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.838835 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" event={"ID":"ac40e6bb-0637-4926-af5e-3ab0f13e0449","Type":"ContainerStarted","Data":"03f72d7a3bea23762b3304ed12d67275d23eb1821418ce4a6d4508a1daba180b"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.845149 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" event={"ID":"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8","Type":"ContainerStarted","Data":"0987c8f6a2da2ab84af4493c4c04954612796e59d21ae325fad619bc212eaa0c"} Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.874188 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.886385 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.908095 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:39 crc kubenswrapper[4899]: E0126 20:57:39.909031 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.409019918 +0000 UTC m=+149.790607955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.920789 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.927938 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.934138 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:39 crc kubenswrapper[4899]: I0126 20:57:39.940869 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.009617 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.009874 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.509860919 +0000 UTC m=+149.891448956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.110685 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.111119 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.611107343 +0000 UTC m=+149.992695380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.212768 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.213075 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.713058619 +0000 UTC m=+150.094646656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.229266 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9"] Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.315039 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.315658 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.815644885 +0000 UTC m=+150.197232922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.368738 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.379474 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pv9sc"] Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.394907 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw"] Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.416630 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.417022 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:40.917005252 +0000 UTC m=+150.298593289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.478476 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" podStartSLOduration=129.478455746 podStartE2EDuration="2m9.478455746s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:40.47823753 +0000 UTC m=+149.859825567" watchObservedRunningTime="2026-01-26 20:57:40.478455746 +0000 UTC m=+149.860043783" Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.518376 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.518685 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.018673279 +0000 UTC m=+150.400261316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.622630 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.622977 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.122961478 +0000 UTC m=+150.504549515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: W0126 20:57:40.675376 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfa0ed4a_5d9c_4b54_b733_9a133db47307.slice/crio-ae791f8a611ba20ab568f7ad214998d9cec093e434c447a6667a8b0b92f9bd52 WatchSource:0}: Error finding container ae791f8a611ba20ab568f7ad214998d9cec093e434c447a6667a8b0b92f9bd52: Status 404 returned error can't find the container with id ae791f8a611ba20ab568f7ad214998d9cec093e434c447a6667a8b0b92f9bd52 Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.731544 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.732016 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.232003435 +0000 UTC m=+150.613591472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.779249 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.835500 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.835863 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.335841889 +0000 UTC m=+150.717429926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.885652 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" event={"ID":"ac40e6bb-0637-4926-af5e-3ab0f13e0449","Type":"ContainerStarted","Data":"97b71b1350cc3c483a7479ee873db09337fdcddef7978f2af5d2b41904ca7174"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.906708 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" event={"ID":"cafb1f49-e8d2-42bb-b00c-c069f86db12c","Type":"ContainerStarted","Data":"57992fe5d3a6f2de7ffb23d7284f6e2acf4bf547a44c3a5f51401c5e004b68e2"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.909144 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" event={"ID":"961c22be-2ec9-4136-a19a-191ca2eab35b","Type":"ContainerStarted","Data":"a2b0b99fd0a80e8696da680e258740c2070b0f6319b93261ee8084025b1e7245"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.926603 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" event={"ID":"c7e4ee5d-670e-40e6-9102-e9766a492381","Type":"ContainerStarted","Data":"c3145e3a2c6a3b32e5d9267b61f45c8fb8688b4dd86e9abac9f9d4cf5bbae5b3"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.926641 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" event={"ID":"c7e4ee5d-670e-40e6-9102-e9766a492381","Type":"ContainerStarted","Data":"a3ed7bb7b4605028bba5a47750379e537f3bbce622f33c99a98fc4dd538e7bfb"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.928528 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rjk22" event={"ID":"b1684954-64a8-4748-89f9-31c6386cd712","Type":"ContainerStarted","Data":"da0062034f521b75030214a61008fb5dc9ba45545f55bc502f51e6398b0e74f9"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.928552 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rjk22" event={"ID":"b1684954-64a8-4748-89f9-31c6386cd712","Type":"ContainerStarted","Data":"b53af674dada373cd3bcb8c35e5cf9e79745e8925a41a541cf53d1abb23b4cd1"} Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.933860 4899 generic.go:334] "Generic (PLEG): container finished" podID="d605cb6d-937c-45d9-bc80-3a7b7ac58ca8" containerID="e30a6db82325d5e6977aba79b5470276dddb5c56f3788677326241f634864296" exitCode=0 Jan 26 20:57:40 crc kubenswrapper[4899]: I0126 20:57:40.937469 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:40 crc kubenswrapper[4899]: E0126 20:57:40.938983 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.438966552 +0000 UTC m=+150.820554589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.029162 4899 csr.go:261] certificate signing request csr-lqrbx is approved, waiting to be issued Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.035675 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" event={"ID":"cfa0ed4a-5d9c-4b54-b733-9a133db47307","Type":"ContainerStarted","Data":"ae791f8a611ba20ab568f7ad214998d9cec093e434c447a6667a8b0b92f9bd52"} Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.035707 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" event={"ID":"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2","Type":"ContainerStarted","Data":"4c1bbbc6cdcff3dcc9725ba7d3e88e3dba5a265f6a5e378631e16c83c22ea22e"} Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.035717 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" event={"ID":"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8","Type":"ContainerDied","Data":"e30a6db82325d5e6977aba79b5470276dddb5c56f3788677326241f634864296"} Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.038482 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.046134 4899 csr.go:257] certificate signing request csr-lqrbx is issued Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.048273 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.548239166 +0000 UTC m=+150.929827203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.084725 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9tfhr" podStartSLOduration=129.084698191 podStartE2EDuration="2m9.084698191s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:41.082441051 +0000 UTC m=+150.464029088" watchObservedRunningTime="2026-01-26 20:57:41.084698191 +0000 UTC m=+150.466286228" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.140112 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.142353 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.642338327 +0000 UTC m=+151.023926354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.170445 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:41 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:41 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:41 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.170512 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.210074 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" podStartSLOduration=130.210056956 podStartE2EDuration="2m10.210056956s" 
podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:41.208767316 +0000 UTC m=+150.590355353" watchObservedRunningTime="2026-01-26 20:57:41.210056956 +0000 UTC m=+150.591644983" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.248460 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.248944 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.748900477 +0000 UTC m=+151.130488524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.278370 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rjk22" podStartSLOduration=5.278351614 podStartE2EDuration="5.278351614s" podCreationTimestamp="2026-01-26 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:41.276410453 +0000 UTC m=+150.657998500" watchObservedRunningTime="2026-01-26 20:57:41.278351614 +0000 UTC m=+150.659939651" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.296784 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"] Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.303915 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jdxz6"] Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.350315 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.351031 4899 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.851016997 +0000 UTC m=+151.232605024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.436140 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9dm9b" podStartSLOduration=130.436121519 podStartE2EDuration="2m10.436121519s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:41.435758167 +0000 UTC m=+150.817346204" watchObservedRunningTime="2026-01-26 20:57:41.436121519 +0000 UTC m=+150.817709556" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.451422 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.451590 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:41.95156494 +0000 UTC m=+151.333152977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.451837 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.452167 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:41.952159798 +0000 UTC m=+151.333747825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: W0126 20:57:41.494263 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f4c4b4c_b67d_43b5_bf4a_6cfee043ba48.slice/crio-44ae4c78714a812d75f43b108b163d8a7bf11460ec203bccf1c0fcf97b3c4d70 WatchSource:0}: Error finding container 44ae4c78714a812d75f43b108b163d8a7bf11460ec203bccf1c0fcf97b3c4d70: Status 404 returned error can't find the container with id 44ae4c78714a812d75f43b108b163d8a7bf11460ec203bccf1c0fcf97b3c4d70 Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.555044 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.555355 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.055326852 +0000 UTC m=+151.436914889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.558505 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zt55n"] Jan 26 20:57:41 crc kubenswrapper[4899]: W0126 20:57:41.592658 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedfc8cc3_a964_4fa5_9ddf_fb15d33a236b.slice/crio-336051609d8625336c4b3f56c9fb3ad7ca5a6e6433e834bb68b439b348d60e7e WatchSource:0}: Error finding container 336051609d8625336c4b3f56c9fb3ad7ca5a6e6433e834bb68b439b348d60e7e: Status 404 returned error can't find the container with id 336051609d8625336c4b3f56c9fb3ad7ca5a6e6433e834bb68b439b348d60e7e Jan 26 20:57:41 crc kubenswrapper[4899]: W0126 20:57:41.644843 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c3267adfb7adf31c55045226dee86887e8091ea40bf397dd3c1e7e2fc2574aa8 WatchSource:0}: Error finding container c3267adfb7adf31c55045226dee86887e8091ea40bf397dd3c1e7e2fc2574aa8: Status 404 returned error can't find the container with id c3267adfb7adf31c55045226dee86887e8091ea40bf397dd3c1e7e2fc2574aa8 Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.660552 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.660856 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.160844719 +0000 UTC m=+151.542432756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.723292 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jsrd8"] Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.764506 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.764806 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:42.264778597 +0000 UTC m=+151.646366644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.785700 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:41 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:41 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:41 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.785757 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.865990 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.866682 4899 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.366669091 +0000 UTC m=+151.748257128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.975066 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:41 crc kubenswrapper[4899]: E0126 20:57:41.975327 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.475313205 +0000 UTC m=+151.856901242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:41 crc kubenswrapper[4899]: I0126 20:57:41.995748 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" event={"ID":"d605cb6d-937c-45d9-bc80-3a7b7ac58ca8","Type":"ContainerStarted","Data":"20095779c709ff8c1dee9ac4dc24065d21dac0de1bd133b18ffd9844b6d0480f"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:41.999668 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" event={"ID":"cafb1f49-e8d2-42bb-b00c-c069f86db12c","Type":"ContainerStarted","Data":"7dc729d6f343232499e098632940d4aa25b4357664b704a4601507ce51687ad9"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:41.999690 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" event={"ID":"cafb1f49-e8d2-42bb-b00c-c069f86db12c","Type":"ContainerStarted","Data":"b58170fb151c031e486561ae1d8b5cc9e75e5075d88d1417719ccdb12aa89afd"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.003503 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" event={"ID":"22530841-f07a-4811-bbdf-9964a1818e16","Type":"ContainerStarted","Data":"b92c6e27d8f4b907d730fe8c00268925ba56d63c5d52bb83e2765c697ff98114"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.007250 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" 
event={"ID":"c7e4ee5d-670e-40e6-9102-e9766a492381","Type":"ContainerStarted","Data":"94695df52bd1f052a6d777c716e0de67146ac21b7003e256d9b7870f748c52fd"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.024566 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c3267adfb7adf31c55045226dee86887e8091ea40bf397dd3c1e7e2fc2574aa8"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.041061 4899 generic.go:334] "Generic (PLEG): container finished" podID="cfa0ed4a-5d9c-4b54-b733-9a133db47307" containerID="0b8c37d4af4943f207e9f371c5177363a9908e6443711a14f8ddb53b8c9137d2" exitCode=0 Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.041179 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" event={"ID":"cfa0ed4a-5d9c-4b54-b733-9a133db47307","Type":"ContainerDied","Data":"0b8c37d4af4943f207e9f371c5177363a9908e6443711a14f8ddb53b8c9137d2"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.047227 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 20:52:41 +0000 UTC, rotation deadline is 2026-10-14 01:36:28.576717549 +0000 UTC Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.048594 4899 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6244h38m46.528133522s for next certificate rotation Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.072868 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" event={"ID":"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2","Type":"ContainerStarted","Data":"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.075293 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.085618 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.086390 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.586368634 +0000 UTC m=+151.967956671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.090823 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" podStartSLOduration=130.090793862 podStartE2EDuration="2m10.090793862s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.048421862 +0000 UTC m=+151.430009899" watchObservedRunningTime="2026-01-26 20:57:42.090793862 +0000 UTC m=+151.472381889" Jan 26 20:57:42 crc 
kubenswrapper[4899]: I0126 20:57:42.111295 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.113530 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl68z"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.123419 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" event={"ID":"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48","Type":"ContainerStarted","Data":"55bfcb99ed87aed854168d9eef7578c9b4083036f4f6e8113687f17163b8a8d0"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.125222 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.125318 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" event={"ID":"0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48","Type":"ContainerStarted","Data":"44ae4c78714a812d75f43b108b163d8a7bf11460ec203bccf1c0fcf97b3c4d70"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.127250 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.130374 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.134571 4899 patch_prober.go:28] interesting pod/console-operator-58897d9998-jdxz6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" 
start-of-body= Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.134602 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" podUID="0f4c4b4c-b67d-43b5-bf4a-6cfee043ba48" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.137700 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lxbfv"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.140210 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.145143 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.146246 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7skd" podStartSLOduration=131.146226039 podStartE2EDuration="2m11.146226039s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.141014407 +0000 UTC m=+151.522602444" watchObservedRunningTime="2026-01-26 20:57:42.146226039 +0000 UTC m=+151.527814076" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.148819 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.180315 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" event={"ID":"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b","Type":"ContainerStarted","Data":"336051609d8625336c4b3f56c9fb3ad7ca5a6e6433e834bb68b439b348d60e7e"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.186558 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.188085 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.688068913 +0000 UTC m=+152.069656950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.194869 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd047a32_c6c9_4376_a82b_514d2bfede44.slice/crio-7a8a1a19ccd04efe052e7b8de9d4d692f3e1dbf4724d30f722931a88b770aebb WatchSource:0}: Error finding container 7a8a1a19ccd04efe052e7b8de9d4d692f3e1dbf4724d30f722931a88b770aebb: Status 404 returned error can't find the container with id 7a8a1a19ccd04efe052e7b8de9d4d692f3e1dbf4724d30f722931a88b770aebb Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.198577 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8vzmg"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.198613 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jsrd8" event={"ID":"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc","Type":"ContainerStarted","Data":"a16b46900902fcb4d277435b3f21c4188a5bab64e4645e91f6d01fe1f61af335"} Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.207692 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd50c155_e6f3_437e_bd5a_672325cf782c.slice/crio-55ae0e6dff78fe2d26c426b679f945f2c5c91b62ef902bc6e21452ca9d15e1d7 WatchSource:0}: Error finding container 55ae0e6dff78fe2d26c426b679f945f2c5c91b62ef902bc6e21452ca9d15e1d7: Status 404 returned error can't find the container with id 
55ae0e6dff78fe2d26c426b679f945f2c5c91b62ef902bc6e21452ca9d15e1d7 Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.210537 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnw9c"] Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.228619 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71935a90_1ee3_448e_a8f6_7a370ef7062c.slice/crio-2a068941a4387d977075b24eed585ae723b3c6801bb397152a6e6d634aa3b00d WatchSource:0}: Error finding container 2a068941a4387d977075b24eed585ae723b3c6801bb397152a6e6d634aa3b00d: Status 404 returned error can't find the container with id 2a068941a4387d977075b24eed585ae723b3c6801bb397152a6e6d634aa3b00d Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.232018 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4v7dp"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.236165 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cqz7t"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.236625 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pv9sc" podStartSLOduration=131.236614685 podStartE2EDuration="2m11.236614685s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.164941312 +0000 UTC m=+151.546529349" watchObservedRunningTime="2026-01-26 20:57:42.236614685 +0000 UTC m=+151.618202712" Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.246851 4899 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79214ec9_11ec_4a5c_bfee_59ebe2caeeea.slice/crio-708a537b81962b260728b2452f63ad36b0a3fe840d2145ced9a460e89ff40fc3 WatchSource:0}: Error finding container 708a537b81962b260728b2452f63ad36b0a3fe840d2145ced9a460e89ff40fc3: Status 404 returned error can't find the container with id 708a537b81962b260728b2452f63ad36b0a3fe840d2145ced9a460e89ff40fc3 Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.253400 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spqzr"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.258985 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jwnb"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.263545 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.267744 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.267781 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" event={"ID":"961c22be-2ec9-4136-a19a-191ca2eab35b","Type":"ContainerStarted","Data":"654827816ba843d19a0621aab62a947217a287443ccbb4fa076620fe929d31fd"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.267796 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" event={"ID":"961c22be-2ec9-4136-a19a-191ca2eab35b","Type":"ContainerStarted","Data":"97ec031ced1ea687a4432802b9e32b29ced2b25ab10b1ae8747617d3e8c1d6c3"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.267809 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.268444 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt"] Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.269003 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7ba54a4_6433_42a2_9bb6_2fe7ff1ca1f6.slice/crio-20ed1f07495b0acd995ae917aa4496e5ba411f137a7feb91aee6a9fab261a059 WatchSource:0}: Error finding container 20ed1f07495b0acd995ae917aa4496e5ba411f137a7feb91aee6a9fab261a059: Status 404 returned error can't find the container with id 20ed1f07495b0acd995ae917aa4496e5ba411f137a7feb91aee6a9fab261a059 Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.269285 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" podStartSLOduration=130.269259722 podStartE2EDuration="2m10.269259722s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.201435659 +0000 UTC m=+151.583023696" watchObservedRunningTime="2026-01-26 20:57:42.269259722 +0000 UTC m=+151.650847759" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.270958 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2b9a16f044f9bfb60073d6ee5fe7023d010fa4217051412afad88f7cbaeaa7d8"} Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.271727 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"] Jan 26 
20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.278653 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.278710 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.278721 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.279331 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" podStartSLOduration=131.279320095 podStartE2EDuration="2m11.279320095s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.233220249 +0000 UTC m=+151.614808286" watchObservedRunningTime="2026-01-26 20:57:42.279320095 +0000 UTC m=+151.660908132" Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.287025 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9803ca87_d488_4845_b59d_f928fa6e45f6.slice/crio-712589a2c037edbc36f5ad8fd464d7d323953c94f3cee3240b8c58f6ead8ff07 WatchSource:0}: Error finding container 712589a2c037edbc36f5ad8fd464d7d323953c94f3cee3240b8c58f6ead8ff07: Status 404 returned error can't find the container with id 712589a2c037edbc36f5ad8fd464d7d323953c94f3cee3240b8c58f6ead8ff07 Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.288268 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.288578 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.788566803 +0000 UTC m=+152.170154840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.287909 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96lmw" podStartSLOduration=131.287894432 podStartE2EDuration="2m11.287894432s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:42.285772706 +0000 UTC m=+151.667360743" watchObservedRunningTime="2026-01-26 20:57:42.287894432 +0000 UTC m=+151.669482469" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.304422 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.333566 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-nhxcw"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.389789 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.390037 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:42.890011573 +0000 UTC m=+152.271599610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.390193 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.390726 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:42.890709325 +0000 UTC m=+152.272297362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.414731 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.444496 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.445846 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.453183 4899 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jtwht container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]log ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]etcd ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/max-in-flight-filter ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/image.openshift.io-apiserver-caches 
ok Jan 26 20:57:42 crc kubenswrapper[4899]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-startinformers ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 20:57:42 crc kubenswrapper[4899]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 20:57:42 crc kubenswrapper[4899]: livez check failed Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.453273 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" podUID="e74d40cc-592a-4c5b-9f56-70f5657fc787" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.472236 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq"] Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.494895 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.495234 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:42.995219381 +0000 UTC m=+152.376807418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.496561 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4"] Jan 26 20:57:42 crc kubenswrapper[4899]: W0126 20:57:42.582167 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f0fd389_9264_494e_a44d_7290896b12b4.slice/crio-2c1e44c3a433f0a34110d7a8add1726e4fe419af00d0f9d577f806457f1d0ead WatchSource:0}: Error finding container 2c1e44c3a433f0a34110d7a8add1726e4fe419af00d0f9d577f806457f1d0ead: Status 404 returned error can't find the container with id 2c1e44c3a433f0a34110d7a8add1726e4fe419af00d0f9d577f806457f1d0ead Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.596609 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.598739 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:43.098606131 +0000 UTC m=+152.480194168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.698019 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.698291 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.198264676 +0000 UTC m=+152.579852713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.698631 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.699224 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.199208535 +0000 UTC m=+152.580796582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.780136 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:42 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:42 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:42 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.780188 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.801675 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.802077 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:43.302058598 +0000 UTC m=+152.683646635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:42 crc kubenswrapper[4899]: I0126 20:57:42.905170 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:42 crc kubenswrapper[4899]: E0126 20:57:42.905581 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.405567222 +0000 UTC m=+152.787155259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.008559 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.008990 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.508969513 +0000 UTC m=+152.890557550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.109855 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.110273 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.610256039 +0000 UTC m=+152.991844076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.210720 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.211142 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.711118701 +0000 UTC m=+153.092706738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.313052 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.313678 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.813608533 +0000 UTC m=+153.195196570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.347731 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" event={"ID":"50718f10-624f-4611-a5ac-d19a63806946","Type":"ContainerStarted","Data":"25c26486ef4502d22796e16c4df6fe4c246f88e8fe2c297859c3648e4da20910"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.366967 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jsrd8" event={"ID":"d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc","Type":"ContainerStarted","Data":"b1e427309d8eb6aacfaa2e521227a8632b35cb53f4233a24e2e0bc77887fd711"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.371559 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" event={"ID":"55013211-6291-4060-b512-07030b99b897","Type":"ContainerStarted","Data":"9f0107aabdae5d97e35007af673083e7e086035351357f60ce19152b2cba3840"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.392188 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.402911 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jsrd8" podStartSLOduration=132.402894045 podStartE2EDuration="2m12.402894045s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.402278955 +0000 UTC m=+152.783866992" watchObservedRunningTime="2026-01-26 20:57:43.402894045 +0000 UTC m=+152.784482082" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.415420 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.415808 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:43.915790536 +0000 UTC m=+153.297378573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.424782 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"53765c8d11c26541bb1694a9004c6b475026b077296d686ca89baa8d707cc2b6"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.471200 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" event={"ID":"86c5e568-89ec-459d-bec4-8b2c0f075531","Type":"ContainerStarted","Data":"6896d62c0d986497db95b29563a39c1f311adc5eded6d56f98fb119bffdfbf99"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.496434 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" event={"ID":"fcf5a119-adec-45a7-bd1d-758f6c1d62ac","Type":"ContainerStarted","Data":"2122a38e0e14b8a40308f515e58d9e419d5ed0fc2966be16c7506c93b2a1ae62"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.496479 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" event={"ID":"fcf5a119-adec-45a7-bd1d-758f6c1d62ac","Type":"ContainerStarted","Data":"48cd0a08f5e80c4943fadb90bbb2407756324f026f8caa5dbeb2f95030228d35"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.509328 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" event={"ID":"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0","Type":"ContainerStarted","Data":"60361243154c9175a3121515b7ce8b4540acdaa8034e695bad153ee5ebf022ac"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.509387 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" event={"ID":"7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0","Type":"ContainerStarted","Data":"48d1e62ecaad7be6a1f327b8f9c513ea6cd96c8ff75929ba18727c11e11870dc"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.521980 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" event={"ID":"71935a90-1ee3-448e-a8f6-7a370ef7062c","Type":"ContainerStarted","Data":"2a068941a4387d977075b24eed585ae723b3c6801bb397152a6e6d634aa3b00d"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.523298 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.525438 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.025422051 +0000 UTC m=+153.407010088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.553302 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" event={"ID":"edfc8cc3-a964-4fa5-9ddf-fb15d33a236b","Type":"ContainerStarted","Data":"05672aa87e7a07d61575c953c77c3267f278be6f27155474922ede890ead957e"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.574819 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" event={"ID":"11d88052-a254-4fc9-ab57-54bee461f27e","Type":"ContainerStarted","Data":"4baf7ff3dab8704d145d4b32823b72e59f1cfd067a2ee9f96eb931aa5e73a485"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.607572 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" event={"ID":"2e09a141-4aca-4102-8161-849997100ca4","Type":"ContainerStarted","Data":"03bbf50651ec3ee6d91bbc686181fcf2bed551f67076f4afb8b6045d900124ad"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.607965 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" podStartSLOduration=131.607948102 podStartE2EDuration="2m11.607948102s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.607296852 +0000 UTC m=+152.988884889" 
watchObservedRunningTime="2026-01-26 20:57:43.607948102 +0000 UTC m=+152.989536139" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.611773 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5kjdm" podStartSLOduration=131.611756541 podStartE2EDuration="2m11.611756541s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.559490783 +0000 UTC m=+152.941078820" watchObservedRunningTime="2026-01-26 20:57:43.611756541 +0000 UTC m=+152.993344578" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.615290 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" event={"ID":"22530841-f07a-4811-bbdf-9964a1818e16","Type":"ContainerStarted","Data":"b076f3b560ed603f5f9c18dc008ebf80ead880bfa18bfb7f20c6927e2eaa3659"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.616098 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.624344 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.624386 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" event={"ID":"c60bec7a-6571-4594-a05f-4603f5959477","Type":"ContainerStarted","Data":"b907c947994bb9919e9c32ad546d2818366911c0fb8e2b14e0e7f0d7250b1d7e"} Jan 26 20:57:43 crc 
kubenswrapper[4899]: I0126 20:57:43.624432 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" event={"ID":"c60bec7a-6571-4594-a05f-4603f5959477","Type":"ContainerStarted","Data":"56057bcc5d31e5f812d38a9930bace59e7642e972985a6e0358e27dc04728384"} Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.625733 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.125715156 +0000 UTC m=+153.507303193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.627172 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.643447 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8vzmg" event={"ID":"8bec087d-1164-43d3-b119-58a88e199403","Type":"ContainerStarted","Data":"86e749c7a2b69a867b5aac8bc470cad08f4edc2fc62682d12ee3a42abf97fd8c"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.643638 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8vzmg" 
event={"ID":"8bec087d-1164-43d3-b119-58a88e199403","Type":"ContainerStarted","Data":"db8da739277cd22dc001161699e4f438bcf493a402372aafedf626e2565ba485"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.644523 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.646803 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zt55n" podStartSLOduration=132.646790902 podStartE2EDuration="2m12.646790902s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.645467961 +0000 UTC m=+153.027056008" watchObservedRunningTime="2026-01-26 20:57:43.646790902 +0000 UTC m=+153.028378939" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.658160 4899 patch_prober.go:28] interesting pod/downloads-7954f5f757-8vzmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.658213 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8vzmg" podUID="8bec087d-1164-43d3-b119-58a88e199403" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.661526 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" event={"ID":"9803ca87-d488-4845-b59d-f928fa6e45f6","Type":"ContainerStarted","Data":"38d3f3100ca430f852b788219853f568a1f59a06e13cf4d8fa89896cd5445174"} Jan 26 
20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.661594 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" event={"ID":"9803ca87-d488-4845-b59d-f928fa6e45f6","Type":"ContainerStarted","Data":"712589a2c037edbc36f5ad8fd464d7d323953c94f3cee3240b8c58f6ead8ff07"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.684731 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8vzmg" podStartSLOduration=132.684717904 podStartE2EDuration="2m12.684717904s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.684074194 +0000 UTC m=+153.065662231" watchObservedRunningTime="2026-01-26 20:57:43.684717904 +0000 UTC m=+153.066305941" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.684918 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" event={"ID":"8086b1ce-02cb-465d-9191-ce5af96d2f7a","Type":"ContainerStarted","Data":"c8321994e88e442f9c6b7574819686908f1e2f7db5533c98f518b7c5be7f5c82"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.684966 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" event={"ID":"8086b1ce-02cb-465d-9191-ce5af96d2f7a","Type":"ContainerStarted","Data":"460d502ff2f7ebf7e6637f23ba683b02c200ed08a3073c74270f9c2074f3eca7"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.694079 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spqzr" event={"ID":"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6","Type":"ContainerStarted","Data":"7e80e3c8a2fa0de504618b12e5256d6d2458f388cff1d403d5e09ebdb81a2d0e"} Jan 26 20:57:43 crc 
kubenswrapper[4899]: I0126 20:57:43.694118 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spqzr" event={"ID":"f7ba54a4-6433-42a2-9bb6-2fe7ff1ca1f6","Type":"ContainerStarted","Data":"20ed1f07495b0acd995ae917aa4496e5ba411f137a7feb91aee6a9fab261a059"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.705737 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nhxcw" event={"ID":"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5","Type":"ContainerStarted","Data":"45e077321b9091a663b609fe81b0e590e30b7f03a613ea7b9d57dae226f54476"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.708905 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" event={"ID":"7390da62-a9fa-495a-8d5a-ed2c660337cf","Type":"ContainerStarted","Data":"712649bc5d7fc47aec9f35ecd15b805f548804d89f1d1308542be056ae3cdf60"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.708961 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" event={"ID":"7390da62-a9fa-495a-8d5a-ed2c660337cf","Type":"ContainerStarted","Data":"8c3291da14982eeb8e93b2fd43c0f7edac91f537cb79bb1f242895b01a1854f5"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.709689 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" podStartSLOduration=132.709679261 podStartE2EDuration="2m12.709679261s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.708006709 +0000 UTC m=+153.089594746" watchObservedRunningTime="2026-01-26 20:57:43.709679261 +0000 UTC m=+153.091267298" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 
20:57:43.719322 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" event={"ID":"53f1cb30-6429-4ebc-8301-5f1de3e70611","Type":"ContainerStarted","Data":"c1a1e17cfd99ccf593e6b93b8739c1c5e32228cec87d21d5545d77d4a33ee1d3"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.719369 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" event={"ID":"53f1cb30-6429-4ebc-8301-5f1de3e70611","Type":"ContainerStarted","Data":"14d935c7e91a9f0fed23dd13d754230d9f609a757e63c6976fd78cf12bc4282b"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.730811 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.732453 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.23244062 +0000 UTC m=+153.614028657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.733543 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" event={"ID":"6ba045ff-8d96-4b64-819d-9de471453463","Type":"ContainerStarted","Data":"d441062db3d32883857a58d8c32fd23893bf058489dc7e64605810ce6a7ba6dd"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.734315 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-spqzr" podStartSLOduration=7.734306898 podStartE2EDuration="7.734306898s" podCreationTimestamp="2026-01-26 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.734104982 +0000 UTC m=+153.115693019" watchObservedRunningTime="2026-01-26 20:57:43.734306898 +0000 UTC m=+153.115894935" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.735020 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" event={"ID":"cfa0ed4a-5d9c-4b54-b733-9a133db47307","Type":"ContainerStarted","Data":"472ddc399b26e01da1c7fd2c687f0e62ad9aa82c90843aa4d4dc8d466cdc3fd8"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.735450 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.739318 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" event={"ID":"a8acf30a-687a-409e-a4ee-57d340449932","Type":"ContainerStarted","Data":"2e3d5d33cf3288cc8ab5b6d344dcb0d877f28198c5ff8e8a70a459df910515f2"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.739359 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" event={"ID":"a8acf30a-687a-409e-a4ee-57d340449932","Type":"ContainerStarted","Data":"ba1e0c84fe63d221b4a795d5f9928c45aed27b329fc6057c9fec63edc6b37280"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.752156 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" event={"ID":"fd50c155-e6f3-437e-bd5a-672325cf782c","Type":"ContainerStarted","Data":"55ae0e6dff78fe2d26c426b679f945f2c5c91b62ef902bc6e21452ca9d15e1d7"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.756206 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" event={"ID":"bd047a32-c6c9-4376-a82b-514d2bfede44","Type":"ContainerStarted","Data":"f996f3a0719ce81106a6a333a6d3ee43b6ac4fe75028c8876918881314c5ba21"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.756257 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" event={"ID":"bd047a32-c6c9-4376-a82b-514d2bfede44","Type":"ContainerStarted","Data":"7a8a1a19ccd04efe052e7b8de9d4d692f3e1dbf4724d30f722931a88b770aebb"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.757630 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" 
event={"ID":"5800f5da-f007-4a93-ab2b-97912d369526","Type":"ContainerStarted","Data":"5fb0b97aed1cc5bedb66f36c83c87870e6232c130aa6df491a45d81e8766820d"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.757653 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" event={"ID":"5800f5da-f007-4a93-ab2b-97912d369526","Type":"ContainerStarted","Data":"33f4006a9e30f80ea0891629c673123de0e1f377dcde4d89e33305230e5dc0f8"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.765100 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" event={"ID":"27e7ffa3-46b0-4531-8bc9-45a93d9efafd","Type":"ContainerStarted","Data":"ef573d26f0f2888d7fee21a5b5e72655e5f9a9c0e58f35a941a092bb6f1dd27d"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.769057 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" event={"ID":"9aed051f-c6e6-4694-8a2a-065e5dd6efa4","Type":"ContainerStarted","Data":"abb0fa19a5442370a6ea0a5b523bf9606cb407bb958e5b5ed13ba040634898bd"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.769099 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" event={"ID":"9aed051f-c6e6-4694-8a2a-065e5dd6efa4","Type":"ContainerStarted","Data":"ddcd4752b38ec0b0a2f9ef7c7867323b621a7fe4287ee6e974c566a69e620eb4"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.770021 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.771243 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" 
event={"ID":"5f0fd389-9264-494e-a44d-7290896b12b4","Type":"ContainerStarted","Data":"2c1e44c3a433f0a34110d7a8add1726e4fe419af00d0f9d577f806457f1d0ead"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.772577 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" event={"ID":"c98d3776-03b4-4c7c-b106-4ca47db60dac","Type":"ContainerStarted","Data":"b5c4177b927f5ad572065eeb4825658044d35cccfa36a493fb61059c478551d3"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.772602 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" event={"ID":"c98d3776-03b4-4c7c-b106-4ca47db60dac","Type":"ContainerStarted","Data":"64b6c6e648ea14fcc0713d18585e8d5635d060307ac43787c0f0b48e777e6248"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.773070 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.774049 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b2c2fb9660952a2aa82c2584518099ef0b098328166481aaa359c49b54fa780f"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.774071 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"40fffbd190e890ebc4a8b11ca7756b87cba5acbd18889c2c9cabf6a413cde026"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.774545 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.782265 4899 patch_prober.go:28] interesting 
pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:43 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:43 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:43 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.782313 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.783768 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dbce8f5d9de360cdf8eb1b8d9fe81c248dac2810410ec357c4655165e5685a58"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.784538 4899 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xl68z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.784587 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.791351 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" event={"ID":"79214ec9-11ec-4a5c-bfee-59ebe2caeeea","Type":"ContainerStarted","Data":"708a537b81962b260728b2452f63ad36b0a3fe840d2145ced9a460e89ff40fc3"} Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.808907 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.809191 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.809690 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n7jfd" podStartSLOduration=131.809676246 podStartE2EDuration="2m11.809676246s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.809026826 +0000 UTC m=+153.190614863" watchObservedRunningTime="2026-01-26 20:57:43.809676246 +0000 UTC m=+153.191264283" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.820868 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jdxz6" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.831903 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.833077 4899 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.333063315 +0000 UTC m=+153.714651352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.843495 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.845869 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4v7dp" podStartSLOduration=132.845858503 podStartE2EDuration="2m12.845858503s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.828599466 +0000 UTC m=+153.210187503" watchObservedRunningTime="2026-01-26 20:57:43.845858503 +0000 UTC m=+153.227446540" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.868004 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.934796 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:43 crc kubenswrapper[4899]: E0126 20:57:43.937551 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.437536339 +0000 UTC m=+153.819124376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:43 crc kubenswrapper[4899]: I0126 20:57:43.961746 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwlnd" podStartSLOduration=131.961732103 podStartE2EDuration="2m11.961732103s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:43.92344175 +0000 UTC m=+153.305029787" watchObservedRunningTime="2026-01-26 20:57:43.961732103 +0000 UTC m=+153.343320140" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.037812 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.037913 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.537881055 +0000 UTC m=+153.919469092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.038159 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.038501 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.538487554 +0000 UTC m=+153.920075591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.049571 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h9sjz" podStartSLOduration=133.049555039 podStartE2EDuration="2m13.049555039s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.047731202 +0000 UTC m=+153.429319239" watchObservedRunningTime="2026-01-26 20:57:44.049555039 +0000 UTC m=+153.431143076" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.089854 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqmc2" podStartSLOduration=133.089836774 podStartE2EDuration="2m13.089836774s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.089366059 +0000 UTC m=+153.470954096" watchObservedRunningTime="2026-01-26 20:57:44.089836774 +0000 UTC m=+153.471424811" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.144442 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.144763 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.644747854 +0000 UTC m=+154.026335891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.193178 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" podStartSLOduration=132.193162112 podStartE2EDuration="2m12.193162112s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.192227943 +0000 UTC m=+153.573815980" watchObservedRunningTime="2026-01-26 20:57:44.193162112 +0000 UTC m=+153.574750149" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.193326 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" podStartSLOduration=133.193322447 podStartE2EDuration="2m13.193322447s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.159471523 +0000 UTC m=+153.541059560" watchObservedRunningTime="2026-01-26 20:57:44.193322447 +0000 UTC m=+153.574910484" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.246755 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.247284 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.747267098 +0000 UTC m=+154.128855135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.253666 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cjdkv" podStartSLOduration=133.253651257 podStartE2EDuration="2m13.253651257s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.214469956 +0000 UTC m=+153.596057993" 
watchObservedRunningTime="2026-01-26 20:57:44.253651257 +0000 UTC m=+153.635239294" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.254215 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" podStartSLOduration=132.254211474 podStartE2EDuration="2m12.254211474s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.252204481 +0000 UTC m=+153.633792518" watchObservedRunningTime="2026-01-26 20:57:44.254211474 +0000 UTC m=+153.635799511" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.283481 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" podStartSLOduration=132.283444495 podStartE2EDuration="2m12.283444495s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.275412374 +0000 UTC m=+153.657000411" watchObservedRunningTime="2026-01-26 20:57:44.283444495 +0000 UTC m=+153.665032532" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.349871 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.350218 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 20:57:44.850203944 +0000 UTC m=+154.231791981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.452613 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.452899 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:44.952884983 +0000 UTC m=+154.334473020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.553411 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.553680 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.053664302 +0000 UTC m=+154.435252339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.655060 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.655471 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.155451763 +0000 UTC m=+154.537039810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.755654 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.755877 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.25584565 +0000 UTC m=+154.637433697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.755960 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.756401 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.256390207 +0000 UTC m=+154.637978314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.779142 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:44 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:44 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:44 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.779226 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.797565 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cqz7t" event={"ID":"27e7ffa3-46b0-4531-8bc9-45a93d9efafd","Type":"ContainerStarted","Data":"24a743cb3eaf930a2b77b511f038c18257b1887f330cdff06790f2c6e6c275a2"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.799547 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" event={"ID":"6ba045ff-8d96-4b64-819d-9de471453463","Type":"ContainerStarted","Data":"87acb5d76e507f434cfa295cc9c70c5c91db5b921e83acab7df0cfd956ff5b37"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 
20:57:44.799594 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" event={"ID":"6ba045ff-8d96-4b64-819d-9de471453463","Type":"ContainerStarted","Data":"ab8e4f2952977cfaba8ef1edca2e08b1a2fe1868f8136a845282ea0833ac3738"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.801448 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" event={"ID":"fcf5a119-adec-45a7-bd1d-758f6c1d62ac","Type":"ContainerStarted","Data":"d641f9de96fdc28bd7ad17bd3bd3fdb0aa97dfddbd28a037a5f9ed7c65ec79d3"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.801563 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.802861 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" event={"ID":"50718f10-624f-4611-a5ac-d19a63806946","Type":"ContainerStarted","Data":"3a9a4c14bd5a126873db142eaf0ff2aedb1917886f3e223fd05503ab708d005b"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.803146 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.804383 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" event={"ID":"2e09a141-4aca-4102-8161-849997100ca4","Type":"ContainerStarted","Data":"d8f055377425e5fc1baa590756dac908dd42208494bd0e9e0ac4dd1b85b9dcbc"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.806343 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" 
event={"ID":"11d88052-a254-4fc9-ab57-54bee461f27e","Type":"ContainerStarted","Data":"49748381fb2ca9c2f9cfaa133b721ea2478135c345e222e8c572928524c3b215"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.807904 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" event={"ID":"5f0fd389-9264-494e-a44d-7290896b12b4","Type":"ContainerStarted","Data":"d37665862885bb1192cf452bb1fd113d049fef66147e40638c63e51142dbd63f"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.808306 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.809699 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" event={"ID":"86c5e568-89ec-459d-bec4-8b2c0f075531","Type":"ContainerStarted","Data":"393d247d329cdf1dfc38cb0b483a770ef053d54e17e65d01b28f399a39857dcc"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.812411 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" event={"ID":"79214ec9-11ec-4a5c-bfee-59ebe2caeeea","Type":"ContainerStarted","Data":"8eac1e61648187d19f64ee29c1b4cc88fc5ad4ba90b53980277fa61d9e94dcfe"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.812438 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" event={"ID":"79214ec9-11ec-4a5c-bfee-59ebe2caeeea","Type":"ContainerStarted","Data":"4edcc251fc277847580ead04934f725bbe96ebb567411de0c95d362796e25f51"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.814493 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" 
event={"ID":"53f1cb30-6429-4ebc-8301-5f1de3e70611","Type":"ContainerStarted","Data":"0676154b4d61c321312708408141a61f603e6106b610b34f718a1c46653046f2"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.815807 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" event={"ID":"55013211-6291-4060-b512-07030b99b897","Type":"ContainerStarted","Data":"ec62c5c02caff1012a8ddfac5f3e0ffc73a24cbcec1c93bae0f72ecf8c0067d5"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.817791 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nhxcw" event={"ID":"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5","Type":"ContainerStarted","Data":"1be35d9e3d63210b2deccab33f4cad550b26a1a5f6c36aa42cafd32c9b6e0817"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.817917 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nhxcw" event={"ID":"d1303d04-8d7d-4dec-8ad3-0e55dd94b9d5","Type":"ContainerStarted","Data":"b97d50f6c05d4058f2a2f1c5ca480244c625dda1ac5242fb331c65563a6e782e"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.818363 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.819205 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cctdv" event={"ID":"fd50c155-e6f3-437e-bd5a-672325cf782c","Type":"ContainerStarted","Data":"2525b006d099b119c7a827a87adeb1534ecd49e8342510151d582cb47a764c52"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.821041 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" 
event={"ID":"c60bec7a-6571-4594-a05f-4603f5959477","Type":"ContainerStarted","Data":"b92aac44ce183180003eccd21860353e03b111747e9efdc380fb7c70080d07f3"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.823177 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" event={"ID":"bd047a32-c6c9-4376-a82b-514d2bfede44","Type":"ContainerStarted","Data":"e552b3a6369bc0482c1ddbe5f809f969ceff2993806f8aacd9e02eababf19967"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.825185 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmtk7" event={"ID":"71935a90-1ee3-448e-a8f6-7a370ef7062c","Type":"ContainerStarted","Data":"029715630d67bc7adfae182d62ebc63d65690e1b398ab15932edcfd775d556d4"} Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.827649 4899 patch_prober.go:28] interesting pod/downloads-7954f5f757-8vzmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.827702 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8vzmg" podUID="8bec087d-1164-43d3-b119-58a88e199403" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.832107 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.837677 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" Jan 26 20:57:44 crc 
kubenswrapper[4899]: I0126 20:57:44.844168 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvbz6" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.856958 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.857344 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.357313881 +0000 UTC m=+154.738901908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.857505 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.858026 4899 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.358005583 +0000 UTC m=+154.739593620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.868827 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plssq" podStartSLOduration=132.868797489 podStartE2EDuration="2m12.868797489s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.83223764 +0000 UTC m=+154.213825677" watchObservedRunningTime="2026-01-26 20:57:44.868797489 +0000 UTC m=+154.250385526" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.898724 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-68k5w" podStartSLOduration=132.89869471 podStartE2EDuration="2m12.89869471s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.874552308 +0000 UTC m=+154.256140345" watchObservedRunningTime="2026-01-26 20:57:44.89869471 +0000 UTC m=+154.280282737" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.937080 4899 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jwnb" podStartSLOduration=132.937061185 podStartE2EDuration="2m12.937061185s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.929636074 +0000 UTC m=+154.311224121" watchObservedRunningTime="2026-01-26 20:57:44.937061185 +0000 UTC m=+154.318649212" Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.958829 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.959269 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.45903363 +0000 UTC m=+154.840621677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:44 crc kubenswrapper[4899]: I0126 20:57:44.959899 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:44 crc kubenswrapper[4899]: E0126 20:57:44.970694 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.470668842 +0000 UTC m=+154.852256879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.010676 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" podStartSLOduration=133.010102491 podStartE2EDuration="2m13.010102491s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:44.977229857 +0000 UTC m=+154.358817894" watchObservedRunningTime="2026-01-26 20:57:45.010102491 +0000 UTC m=+154.391690528" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.036145 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lxbfv" podStartSLOduration=133.036125401 podStartE2EDuration="2m13.036125401s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.010002708 +0000 UTC m=+154.391590745" watchObservedRunningTime="2026-01-26 20:57:45.036125401 +0000 UTC m=+154.417713438" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.068986 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" podStartSLOduration=134.068948464 podStartE2EDuration="2m14.068948464s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.060256143 +0000 UTC m=+154.441844180" watchObservedRunningTime="2026-01-26 20:57:45.068948464 +0000 UTC m=+154.450536501" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.082182 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-flwsz" podStartSLOduration=133.082164246 podStartE2EDuration="2m13.082164246s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.08135047 +0000 UTC m=+154.462938507" watchObservedRunningTime="2026-01-26 20:57:45.082164246 +0000 UTC m=+154.463752283" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.108348 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.108735 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.608720253 +0000 UTC m=+154.990308290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.114126 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hvmc5" podStartSLOduration=133.114114011 podStartE2EDuration="2m13.114114011s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.11248196 +0000 UTC m=+154.494069997" watchObservedRunningTime="2026-01-26 20:57:45.114114011 +0000 UTC m=+154.495702048" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.164448 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" podStartSLOduration=133.164429848 podStartE2EDuration="2m13.164429848s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.163436987 +0000 UTC m=+154.545025024" watchObservedRunningTime="2026-01-26 20:57:45.164429848 +0000 UTC m=+154.546017885" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.199267 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fl4" podStartSLOduration=133.199247013 podStartE2EDuration="2m13.199247013s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.197993194 +0000 UTC m=+154.579581251" watchObservedRunningTime="2026-01-26 20:57:45.199247013 +0000 UTC m=+154.580835050" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.209419 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.210062 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.710034119 +0000 UTC m=+155.091622346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.239742 4899 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.250670 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t2bcw" podStartSLOduration=133.250643744 podStartE2EDuration="2m13.250643744s" podCreationTimestamp="2026-01-26 20:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.249173788 +0000 UTC m=+154.630761845" watchObservedRunningTime="2026-01-26 20:57:45.250643744 +0000 UTC m=+154.632231781" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.252945 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nhxcw" podStartSLOduration=9.252936375 podStartE2EDuration="9.252936375s" podCreationTimestamp="2026-01-26 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.227118991 +0000 UTC m=+154.608707028" watchObservedRunningTime="2026-01-26 20:57:45.252936375 +0000 UTC m=+154.634524412" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.311595 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.312214 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.812121039 +0000 UTC m=+155.193709076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.414229 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.414490 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:45.914477778 +0000 UTC m=+155.296065815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.432501 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w86jl"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.433556 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.435317 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.447979 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w86jl"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.514799 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.515052 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.015012469 +0000 UTC m=+155.396600506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.515107 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.515548 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.015538456 +0000 UTC m=+155.397126493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.615907 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.616081 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.116044237 +0000 UTC m=+155.497632274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.616404 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.616441 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.616467 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.616524 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpl4z\" (UniqueName: 
\"kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.616986 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.116964225 +0000 UTC m=+155.498552262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.639421 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qbjp6"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.646859 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.652647 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.655357 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbjp6"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.717163 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.717308 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.717364 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpl4z\" (UniqueName: \"kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.717404 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " 
pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.717795 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.717578 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.217551579 +0000 UTC m=+155.599139606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.718010 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities\") pod \"certified-operators-w86jl\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.754993 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpl4z\" (UniqueName: \"kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z\") pod \"certified-operators-w86jl\" (UID: 
\"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.775716 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:45 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:45 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:45 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.775768 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.803422 4899 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hccs4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.803528 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" podUID="50718f10-624f-4611-a5ac-d19a63806946" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.818167 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.818210 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.818240 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmj8\" (UniqueName: \"kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.818274 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.818727 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.31870224 +0000 UTC m=+155.700290277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.831682 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" event={"ID":"11d88052-a254-4fc9-ab57-54bee461f27e","Type":"ContainerStarted","Data":"eb9be0609a6484d3eb9bce0dfda7549d84035a1e69ffee9dfab0a1aa0cd0a11b"} Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.831725 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" event={"ID":"11d88052-a254-4fc9-ab57-54bee461f27e","Type":"ContainerStarted","Data":"e8345cf09612440dc556a614162e672cdafaf396c881cf999275e919561f2112"} Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.831736 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" event={"ID":"11d88052-a254-4fc9-ab57-54bee461f27e","Type":"ContainerStarted","Data":"f9c47e6552e72e4233222fe7d2c20e699f0b77b6759ef5d62a7868eaf46d2386"} Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.833380 4899 generic.go:334] "Generic (PLEG): container finished" podID="55013211-6291-4060-b512-07030b99b897" containerID="ec62c5c02caff1012a8ddfac5f3e0ffc73a24cbcec1c93bae0f72ecf8c0067d5" exitCode=0 Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.833548 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" 
event={"ID":"55013211-6291-4060-b512-07030b99b897","Type":"ContainerDied","Data":"ec62c5c02caff1012a8ddfac5f3e0ffc73a24cbcec1c93bae0f72ecf8c0067d5"} Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.834786 4899 patch_prober.go:28] interesting pod/downloads-7954f5f757-8vzmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.834856 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8vzmg" podUID="8bec087d-1164-43d3-b119-58a88e199403" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.839355 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chl8v"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.840420 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.842098 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hccs4" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.853481 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chl8v"] Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.863171 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bnw9c" podStartSLOduration=9.863152094 podStartE2EDuration="9.863152094s" podCreationTimestamp="2026-01-26 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:45.862008259 +0000 UTC m=+155.243596296" watchObservedRunningTime="2026-01-26 20:57:45.863152094 +0000 UTC m=+155.244740131" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.920597 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.921804 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.42176484 +0000 UTC m=+155.803352877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.926557 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.926753 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.926809 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.926856 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmj8\" (UniqueName: \"kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8\") pod \"community-operators-qbjp6\" (UID: 
\"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.927612 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.927735 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: E0126 20:57:45.927883 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.42786795 +0000 UTC m=+155.809455987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.953817 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmj8\" (UniqueName: \"kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8\") pod \"community-operators-qbjp6\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:45 crc kubenswrapper[4899]: I0126 20:57:45.962618 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.028889 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n87nz"] Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.030308 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.030735 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:46 crc kubenswrapper[4899]: E0126 20:57:46.031109 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.531081045 +0000 UTC m=+155.912669082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.031325 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxkz\" (UniqueName: \"kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.031431 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.031697 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.032096 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: E0126 20:57:46.042136 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 20:57:46.542115729 +0000 UTC m=+155.923703766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vl6d2" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.049496 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.059013 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n87nz"] Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.104994 4899 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T20:57:45.239775065Z","Handler":null,"Name":""} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.120274 4899 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.120316 4899 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133527 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133743 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133769 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vw6\" (UniqueName: \"kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133798 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133817 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133843 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxkz\" (UniqueName: \"kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz\") pod \"certified-operators-chl8v\" (UID: 
\"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.133889 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.134873 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.135208 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.163569 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxkz\" (UniqueName: \"kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz\") pod \"certified-operators-chl8v\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.176494 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.177269 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.234864 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.234945 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.235033 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.235055 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vw6\" (UniqueName: 
\"kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.235789 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.236972 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.249494 4899 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.249532 4899 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.262753 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vw6\" (UniqueName: \"kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6\") pod \"community-operators-n87nz\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.294747 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbjp6"] Jan 26 20:57:46 crc kubenswrapper[4899]: W0126 20:57:46.310622 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c4e1101_fd5e_41c2_9d33_e08d7c529c70.slice/crio-73abdea89dd235b3ac4243258c93d69049966fba25cc3e454320701bff5f93c8 WatchSource:0}: Error finding container 73abdea89dd235b3ac4243258c93d69049966fba25cc3e454320701bff5f93c8: Status 404 returned error can't find the container with id 73abdea89dd235b3ac4243258c93d69049966fba25cc3e454320701bff5f93c8 Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.328496 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vl6d2\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.358656 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.379697 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w86jl"] Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.397695 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:46 crc kubenswrapper[4899]: W0126 20:57:46.448149 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3080d09d_fb91_4cbf_84fe_2b96c34968ba.slice/crio-a067fe5a51be68c9f3aa1f72bb0a34dfa178158beb5192a98fb1e1b74fdf291f WatchSource:0}: Error finding container a067fe5a51be68c9f3aa1f72bb0a34dfa178158beb5192a98fb1e1b74fdf291f: Status 404 returned error can't find the container with id a067fe5a51be68c9f3aa1f72bb0a34dfa178158beb5192a98fb1e1b74fdf291f Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.701290 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n87nz"] Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.706662 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chl8v"] Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.774483 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 26 20:57:46 crc kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:46 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:46 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.774539 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.869667 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerStarted","Data":"f3f219cfd81cf720bba5d801c32cf65a3c49fc7c2a3d5a23e4f0d2a0f72fd83c"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.869716 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerStarted","Data":"d85b8cdf8969c6c45c1e4e64d44ea9f8114b914c808d80f99935f73f136f5162"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.872867 4899 generic.go:334] "Generic (PLEG): container finished" podID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerID="50bfb42d74454f5ef1ffe4e7004e3b9516e86b3dea4902880e636ec63988ebc7" exitCode=0 Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.872917 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerDied","Data":"50bfb42d74454f5ef1ffe4e7004e3b9516e86b3dea4902880e636ec63988ebc7"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.872948 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" 
event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerStarted","Data":"a067fe5a51be68c9f3aa1f72bb0a34dfa178158beb5192a98fb1e1b74fdf291f"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.874663 4899 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.877185 4899 generic.go:334] "Generic (PLEG): container finished" podID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerID="2f899a4c1886730645c14574b5a8716a1dd8fa8707f0351185387c2bb059444b" exitCode=0 Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.877229 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerDied","Data":"2f899a4c1886730645c14574b5a8716a1dd8fa8707f0351185387c2bb059444b"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.877244 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerStarted","Data":"73abdea89dd235b3ac4243258c93d69049966fba25cc3e454320701bff5f93c8"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.879746 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n87nz" event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerStarted","Data":"b604546d8ecdaacbc60fd122edeafd6b21c80ce7d4e63f3ff4ffa5e570c1fab4"} Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.967271 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 20:57:46 crc kubenswrapper[4899]: I0126 20:57:46.991170 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"] Jan 26 20:57:47 crc 
kubenswrapper[4899]: I0126 20:57:47.091518 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.262578 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume\") pod \"55013211-6291-4060-b512-07030b99b897\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.262629 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2ltv\" (UniqueName: \"kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv\") pod \"55013211-6291-4060-b512-07030b99b897\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.262742 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume\") pod \"55013211-6291-4060-b512-07030b99b897\" (UID: \"55013211-6291-4060-b512-07030b99b897\") " Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.263213 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume" (OuterVolumeSpecName: "config-volume") pod "55013211-6291-4060-b512-07030b99b897" (UID: "55013211-6291-4060-b512-07030b99b897"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.267711 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv" (OuterVolumeSpecName: "kube-api-access-b2ltv") pod "55013211-6291-4060-b512-07030b99b897" (UID: "55013211-6291-4060-b512-07030b99b897"). InnerVolumeSpecName "kube-api-access-b2ltv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.280769 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "55013211-6291-4060-b512-07030b99b897" (UID: "55013211-6291-4060-b512-07030b99b897"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.363744 4899 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55013211-6291-4060-b512-07030b99b897-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.363782 4899 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55013211-6291-4060-b512-07030b99b897-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.363791 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2ltv\" (UniqueName: \"kubernetes.io/projected/55013211-6291-4060-b512-07030b99b897-kube-api-access-b2ltv\") on node \"crc\" DevicePath \"\"" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.451147 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 
20:57:47.456398 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jtwht" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.634393 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 20:57:47 crc kubenswrapper[4899]: E0126 20:57:47.634619 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55013211-6291-4060-b512-07030b99b897" containerName="collect-profiles" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.634631 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="55013211-6291-4060-b512-07030b99b897" containerName="collect-profiles" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.634738 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="55013211-6291-4060-b512-07030b99b897" containerName="collect-profiles" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.635720 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.643294 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.658488 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.668654 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6js\" (UniqueName: \"kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.668720 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.668779 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.769613 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities\") pod \"redhat-marketplace-6cqkt\" (UID: 
\"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.769686 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.769774 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6js\" (UniqueName: \"kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.770505 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.770528 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.776062 4899 patch_prober.go:28] interesting pod/router-default-5444994796-9tfhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 20:57:47 crc 
kubenswrapper[4899]: [-]has-synced failed: reason withheld Jan 26 20:57:47 crc kubenswrapper[4899]: [+]process-running ok Jan 26 20:57:47 crc kubenswrapper[4899]: healthz check failed Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.776138 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9tfhr" podUID="997e7432-e74d-4f39-accd-a85b98f21978" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.790910 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6js\" (UniqueName: \"kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js\") pod \"redhat-marketplace-6cqkt\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.843098 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.896053 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" event={"ID":"55013211-6291-4060-b512-07030b99b897","Type":"ContainerDied","Data":"9f0107aabdae5d97e35007af673083e7e086035351357f60ce19152b2cba3840"} Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.896138 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0107aabdae5d97e35007af673083e7e086035351357f60ce19152b2cba3840" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.896323 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.912672 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" event={"ID":"75860fb2-d5e0-449b-bd63-6f27e4a82a85","Type":"ContainerStarted","Data":"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554"} Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.912742 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" event={"ID":"75860fb2-d5e0-449b-bd63-6f27e4a82a85","Type":"ContainerStarted","Data":"c66837c76667c58a7f34d74c83a03ef57070cd618cc306030b9a14eeaabaf074"} Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.913034 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.916547 4899 generic.go:334] "Generic (PLEG): container finished" podID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerID="f3f219cfd81cf720bba5d801c32cf65a3c49fc7c2a3d5a23e4f0d2a0f72fd83c" exitCode=0 Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.916621 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerDied","Data":"f3f219cfd81cf720bba5d801c32cf65a3c49fc7c2a3d5a23e4f0d2a0f72fd83c"} Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.922825 4899 generic.go:334] "Generic (PLEG): container finished" podID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerID="9ff4e49b06d34aaccd8f73e7928a653908af141c0ef2bcc8e15fd84d86b1e30f" exitCode=0 Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.923654 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n87nz" 
event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerDied","Data":"9ff4e49b06d34aaccd8f73e7928a653908af141c0ef2bcc8e15fd84d86b1e30f"} Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.939661 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" podStartSLOduration=136.939640248 podStartE2EDuration="2m16.939640248s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:47.929233924 +0000 UTC m=+157.310821961" watchObservedRunningTime="2026-01-26 20:57:47.939640248 +0000 UTC m=+157.321228285" Jan 26 20:57:47 crc kubenswrapper[4899]: I0126 20:57:47.976679 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.043003 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chbw8"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.044028 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.060271 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chbw8"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.188644 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.189107 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.189155 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckm7\" (UniqueName: \"kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.290521 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zckm7\" (UniqueName: \"kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.290609 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.290632 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.291122 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.291211 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.314760 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckm7\" (UniqueName: \"kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7\") pod \"redhat-marketplace-chbw8\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.317847 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.388301 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.649423 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-848ms"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.650707 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.652836 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.663662 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-848ms"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.713981 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chbw8"] Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.771759 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.778755 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.825826 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4nd\" (UniqueName: \"kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 
20:57:48.825952 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.826083 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.876497 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.876549 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.880421 4899 patch_prober.go:28] interesting pod/console-f9d7485db-jsrd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.880469 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jsrd8" podUID="d8e3a3a3-4e96-4df9-baaa-8d9fbf605fdc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.927037 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4nd\" (UniqueName: 
\"kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.927102 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.927172 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.927659 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.927896 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.939890 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9tfhr" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.939969 
4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cqkt" event={"ID":"6e1db38d-09be-44c8-b4d8-636629805c3c","Type":"ContainerStarted","Data":"0ef8dab4b79498a0c7419b1207124dc3e90275ee06b1f6328bb4caad243e8c9a"} Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.948785 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4nd\" (UniqueName: \"kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd\") pod \"redhat-operators-848ms\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:48 crc kubenswrapper[4899]: I0126 20:57:48.968254 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.038463 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p49cb"] Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.039508 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.046288 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p49cb"] Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.125207 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.125875 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.132222 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.132446 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.146450 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.196185 4899 patch_prober.go:28] interesting pod/downloads-7954f5f757-8vzmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.196234 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8vzmg" podUID="8bec087d-1164-43d3-b119-58a88e199403" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.196496 4899 patch_prober.go:28] interesting pod/downloads-7954f5f757-8vzmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.196516 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8vzmg" podUID="8bec087d-1164-43d3-b119-58a88e199403" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial 
tcp 10.217.0.14:8080: connect: connection refused" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.235857 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.236179 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7gs\" (UniqueName: \"kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.236264 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.236358 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.236436 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.337876 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt7gs\" (UniqueName: \"kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.337937 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.337965 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.337981 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.338015 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content\") pod 
\"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.338486 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.338537 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.340583 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities\") pod \"redhat-operators-p49cb\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.375628 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.375846 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt7gs\" (UniqueName: \"kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs\") pod \"redhat-operators-p49cb\" (UID: 
\"f44aa611-a197-45c2-b4c4-7578006901e1\") " pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.550837 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-848ms"] Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.570367 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:49 crc kubenswrapper[4899]: W0126 20:57:49.597035 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb3afa9_f123_45d3_817a_e5232b62b483.slice/crio-cf5f04f060cc3d6bce32e5b6f87a3e84cc5d676ceab85c803b2beb929507970e WatchSource:0}: Error finding container cf5f04f060cc3d6bce32e5b6f87a3e84cc5d676ceab85c803b2beb929507970e: Status 404 returned error can't find the container with id cf5f04f060cc3d6bce32e5b6f87a3e84cc5d676ceab85c803b2beb929507970e Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.655446 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.942935 4899 generic.go:334] "Generic (PLEG): container finished" podID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerID="49f9e92ecb41c5efdffa90625b1cfa6a425a259d6bf1a614e540547c535781f0" exitCode=0 Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.943288 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cqkt" event={"ID":"6e1db38d-09be-44c8-b4d8-636629805c3c","Type":"ContainerDied","Data":"49f9e92ecb41c5efdffa90625b1cfa6a425a259d6bf1a614e540547c535781f0"} Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.969287 4899 generic.go:334] "Generic (PLEG): container finished" podID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerID="e5b0b696ffee92520077d5293ed7ea1986f381da6ad03bb2aebfbcda74e2d79b" exitCode=0 Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.969386 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerDied","Data":"e5b0b696ffee92520077d5293ed7ea1986f381da6ad03bb2aebfbcda74e2d79b"} Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.969416 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerStarted","Data":"8755935fb1d21df6401188ea5a53c91ac91fcfb6962b61cff26aeb441c8849cb"} Jan 26 20:57:49 crc kubenswrapper[4899]: I0126 20:57:49.996259 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerStarted","Data":"cf5f04f060cc3d6bce32e5b6f87a3e84cc5d676ceab85c803b2beb929507970e"} Jan 26 20:57:50 crc kubenswrapper[4899]: I0126 20:57:50.154021 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-p49cb"] Jan 26 20:57:50 crc kubenswrapper[4899]: I0126 20:57:50.157656 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.039759 4899 generic.go:334] "Generic (PLEG): container finished" podID="f44aa611-a197-45c2-b4c4-7578006901e1" containerID="2ef4af0028480394b65fcb965b196de4876ca75758e59d93965dac1a98b608d6" exitCode=0 Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.040149 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerDied","Data":"2ef4af0028480394b65fcb965b196de4876ca75758e59d93965dac1a98b608d6"} Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.040174 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerStarted","Data":"3e7f88f8617d71a83d6c9ef46a2d6068ca88038a22e3ed29d0632bc5176aa831"} Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.065851 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662","Type":"ContainerStarted","Data":"6716f85816c8968d431be57ffa3afc36632218cbb29ac1f769793419ad1cc1d0"} Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.065905 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662","Type":"ContainerStarted","Data":"497a88a3a677d874906b3234088d9d3a3701bad2293b66ff6c587a8e51eabd03"} Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.089487 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
podStartSLOduration=2.089472047 podStartE2EDuration="2.089472047s" podCreationTimestamp="2026-01-26 20:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:57:51.088241399 +0000 UTC m=+160.469829436" watchObservedRunningTime="2026-01-26 20:57:51.089472047 +0000 UTC m=+160.471060084" Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.119090 4899 generic.go:334] "Generic (PLEG): container finished" podID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerID="34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812" exitCode=0 Jan 26 20:57:51 crc kubenswrapper[4899]: I0126 20:57:51.119138 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerDied","Data":"34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812"} Jan 26 20:57:52 crc kubenswrapper[4899]: I0126 20:57:52.133379 4899 generic.go:334] "Generic (PLEG): container finished" podID="146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" containerID="6716f85816c8968d431be57ffa3afc36632218cbb29ac1f769793419ad1cc1d0" exitCode=0 Jan 26 20:57:52 crc kubenswrapper[4899]: I0126 20:57:52.133569 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662","Type":"ContainerDied","Data":"6716f85816c8968d431be57ffa3afc36632218cbb29ac1f769793419ad1cc1d0"} Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.498691 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.543246 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir\") pod \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.543324 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access\") pod \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\" (UID: \"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662\") " Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.544318 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" (UID: "146d69a5-c1a8-48b3-a6d7-d20d5d5ed662"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.554564 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" (UID: "146d69a5-c1a8-48b3-a6d7-d20d5d5ed662"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.647116 4899 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.647224 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/146d69a5-c1a8-48b3-a6d7-d20d5d5ed662-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.751274 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 20:57:53 crc kubenswrapper[4899]: E0126 20:57:53.751465 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" containerName="pruner" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.751475 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" containerName="pruner" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.751586 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="146d69a5-c1a8-48b3-a6d7-d20d5d5ed662" containerName="pruner" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.752317 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.756057 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.756280 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.776792 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.856084 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.856192 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.957524 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.957621 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.957726 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:53 crc kubenswrapper[4899]: I0126 20:57:53.996991 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.091572 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.171532 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"146d69a5-c1a8-48b3-a6d7-d20d5d5ed662","Type":"ContainerDied","Data":"497a88a3a677d874906b3234088d9d3a3701bad2293b66ff6c587a8e51eabd03"} Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.171576 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497a88a3a677d874906b3234088d9d3a3701bad2293b66ff6c587a8e51eabd03" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.171591 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.576661 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.591762 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88f49476-befa-4689-91cb-c0a8cc1def3d-metrics-certs\") pod \"network-metrics-daemon-5s8xd\" (UID: \"88f49476-befa-4689-91cb-c0a8cc1def3d\") " pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.710136 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nhxcw" Jan 26 20:57:54 crc kubenswrapper[4899]: I0126 20:57:54.755821 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5s8xd" Jan 26 20:57:58 crc kubenswrapper[4899]: I0126 20:57:58.880604 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:58 crc kubenswrapper[4899]: I0126 20:57:58.884959 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jsrd8" Jan 26 20:57:59 crc kubenswrapper[4899]: I0126 20:57:59.197530 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8vzmg" Jan 26 20:58:00 crc kubenswrapper[4899]: I0126 20:58:00.109434 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:58:00 crc kubenswrapper[4899]: I0126 20:58:00.109535 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:58:06 crc kubenswrapper[4899]: I0126 20:58:06.410334 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 20:58:19 crc kubenswrapper[4899]: I0126 20:58:19.791860 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 20:58:19 crc kubenswrapper[4899]: I0126 20:58:19.950581 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-h5tpt" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.553388 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.555034 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.567853 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.708577 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.708643 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.810598 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.810745 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.810792 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.832106 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:29 crc kubenswrapper[4899]: I0126 20:58:29.889821 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:30 crc kubenswrapper[4899]: I0126 20:58:30.110172 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:58:30 crc kubenswrapper[4899]: I0126 20:58:30.110252 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:58:30 crc kubenswrapper[4899]: I0126 20:58:30.837307 4899 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-r8lh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 20:58:30 crc kubenswrapper[4899]: I0126 20:58:30.837387 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8lh9" podUID="cfa0ed4a-5d9c-4b54-b733-9a133db47307" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 20:58:32 crc kubenswrapper[4899]: E0126 20:58:32.585536 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 20:58:32 crc kubenswrapper[4899]: E0126 20:58:32.586161 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slmj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qbjp6_openshift-marketplace(8c4e1101-fd5e-41c2-9d33-e08d7c529c70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 
20:58:32 crc kubenswrapper[4899]: E0126 20:58:32.587471 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qbjp6" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.950179 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.955040 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.955285 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.980764 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.981065 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:33 crc kubenswrapper[4899]: I0126 20:58:33.981151 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access\") pod \"installer-9-crc\" 
(UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.082396 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.082502 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.082541 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.082539 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.082610 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.099492 
4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access\") pod \"installer-9-crc\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:34 crc kubenswrapper[4899]: I0126 20:58:34.282250 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:58:39 crc kubenswrapper[4899]: E0126 20:58:39.423447 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qbjp6" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" Jan 26 20:58:39 crc kubenswrapper[4899]: E0126 20:58:39.751100 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 20:58:39 crc kubenswrapper[4899]: E0126 20:58:39.751282 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpl4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w86jl_openshift-marketplace(3080d09d-fb91-4cbf-84fe-2b96c34968ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:39 crc kubenswrapper[4899]: E0126 20:58:39.753323 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w86jl" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" Jan 26 20:58:40 crc 
kubenswrapper[4899]: E0126 20:58:40.876854 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w86jl" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" Jan 26 20:58:40 crc kubenswrapper[4899]: E0126 20:58:40.931610 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 20:58:40 crc kubenswrapper[4899]: E0126 20:58:40.931789 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lg6js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6cqkt_openshift-marketplace(6e1db38d-09be-44c8-b4d8-636629805c3c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:40 crc kubenswrapper[4899]: E0126 20:58:40.932888 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6cqkt" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" Jan 26 20:58:40 crc 
kubenswrapper[4899]: E0126 20:58:40.963823 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 20:58:40 crc kubenswrapper[4899]: E0126 20:58:40.964026 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zckm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-chbw8_openshift-marketplace(858babe5-eeb7-4ab9-a863-68e0c7a61ee7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:40 crc kubenswrapper[4899]: E0126 20:58:40.965426 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-chbw8" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" Jan 26 20:58:41 crc kubenswrapper[4899]: I0126 20:58:41.119889 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.033103 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6cqkt" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.033390 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-chbw8" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" Jan 26 20:58:44 crc kubenswrapper[4899]: W0126 20:58:44.056158 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode1a8c64f_5fcd_4868_a4df_82d097332e7b.slice/crio-29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad WatchSource:0}: Error finding container 29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad: Status 404 returned 
error can't find the container with id 29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.159576 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.160146 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bxkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-chl8v_openshift-marketplace(47abc2e2-8494-4bc8-b946-46cbd5079434): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.162951 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-chl8v" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.164853 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.165013 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dt7gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-p49cb_openshift-marketplace(f44aa611-a197-45c2-b4c4-7578006901e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.166158 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-p49cb" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" Jan 26 20:58:44 crc 
kubenswrapper[4899]: E0126 20:58:44.174343 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.174497 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-848ms_openshift-marketplace(0bb3afa9-f123-45d3-817a-e5232b62b483): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.176218 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-848ms" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" Jan 26 20:58:44 crc kubenswrapper[4899]: I0126 20:58:44.291041 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5s8xd"] Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.324249 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.324386 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8vw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-n87nz_openshift-marketplace(0b25cc74-1abf-4d2c-b95f-7179eb518d9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.326353 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-n87nz" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" Jan 26 20:58:44 crc 
kubenswrapper[4899]: I0126 20:58:44.478644 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" event={"ID":"88f49476-befa-4689-91cb-c0a8cc1def3d","Type":"ContainerStarted","Data":"09fc7836f245ae7bae9ca9f0560c62e98965f48711969f361c32da895d6fe071"} Jan 26 20:58:44 crc kubenswrapper[4899]: I0126 20:58:44.480288 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1a8c64f-5fcd-4868-a4df-82d097332e7b","Type":"ContainerStarted","Data":"29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad"} Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.482301 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-n87nz" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.490752 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-chl8v" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.490752 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-848ms" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" Jan 26 20:58:44 crc kubenswrapper[4899]: E0126 20:58:44.490867 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-p49cb" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" Jan 26 20:58:44 crc kubenswrapper[4899]: I0126 20:58:44.552898 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 20:58:44 crc kubenswrapper[4899]: W0126 20:58:44.562208 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbb6f8e1b_1528_4285_ab7f_2808df5f1b29.slice/crio-3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03 WatchSource:0}: Error finding container 3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03: Status 404 returned error can't find the container with id 3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03 Jan 26 20:58:44 crc kubenswrapper[4899]: I0126 20:58:44.566967 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.496960 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" event={"ID":"88f49476-befa-4689-91cb-c0a8cc1def3d","Type":"ContainerStarted","Data":"b974a48ac6de234754d135cc1cba00e90476b5d1faf072cea9ac3ceff1422e00"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.497897 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5s8xd" event={"ID":"88f49476-befa-4689-91cb-c0a8cc1def3d","Type":"ContainerStarted","Data":"75bb3f6bc6aa572d447c610f01523d9887155a014b2c13fd97a421927a7f3c56"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.500846 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bb6f8e1b-1528-4285-ab7f-2808df5f1b29","Type":"ContainerStarted","Data":"a36cdae073fac2ab9c07e9c2543723b1a413aaac95e12ea9a935376a11e3c989"} Jan 26 
20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.500879 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bb6f8e1b-1528-4285-ab7f-2808df5f1b29","Type":"ContainerStarted","Data":"3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.506454 4899 generic.go:334] "Generic (PLEG): container finished" podID="e1a8c64f-5fcd-4868-a4df-82d097332e7b" containerID="e3cd38806e0e558e0de50da03571eb5cb56b29524b4833cf52bd5559c85fdb79" exitCode=0 Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.506659 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1a8c64f-5fcd-4868-a4df-82d097332e7b","Type":"ContainerDied","Data":"e3cd38806e0e558e0de50da03571eb5cb56b29524b4833cf52bd5559c85fdb79"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.509181 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e7207368-5f89-416a-8669-f453e80097e2","Type":"ContainerStarted","Data":"d3fb5ed6f1603968843b930f7ffcb181cdeb3b544c8e84d9cff8b42d19903122"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.509265 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e7207368-5f89-416a-8669-f453e80097e2","Type":"ContainerStarted","Data":"86cd668cd535a656fee1b3fc730681daa212b9ef3214e7442da2959e41d1120e"} Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.526678 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5s8xd" podStartSLOduration=194.52665749 podStartE2EDuration="3m14.52665749s" podCreationTimestamp="2026-01-26 20:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:58:45.519888234 
+0000 UTC m=+214.901476301" watchObservedRunningTime="2026-01-26 20:58:45.52665749 +0000 UTC m=+214.908245537" Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.548353 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.548309575 podStartE2EDuration="12.548309575s" podCreationTimestamp="2026-01-26 20:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:58:45.546175944 +0000 UTC m=+214.927763991" watchObservedRunningTime="2026-01-26 20:58:45.548309575 +0000 UTC m=+214.929897652" Jan 26 20:58:45 crc kubenswrapper[4899]: I0126 20:58:45.580416 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.580392999 podStartE2EDuration="16.580392999s" podCreationTimestamp="2026-01-26 20:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:58:45.577363698 +0000 UTC m=+214.958951735" watchObservedRunningTime="2026-01-26 20:58:45.580392999 +0000 UTC m=+214.961981036" Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.516682 4899 generic.go:334] "Generic (PLEG): container finished" podID="e7207368-5f89-416a-8669-f453e80097e2" containerID="d3fb5ed6f1603968843b930f7ffcb181cdeb3b544c8e84d9cff8b42d19903122" exitCode=0 Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.516763 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e7207368-5f89-416a-8669-f453e80097e2","Type":"ContainerDied","Data":"d3fb5ed6f1603968843b930f7ffcb181cdeb3b544c8e84d9cff8b42d19903122"} Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.715445 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.845105 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir\") pod \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.845207 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access\") pod \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\" (UID: \"e1a8c64f-5fcd-4868-a4df-82d097332e7b\") " Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.845258 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e1a8c64f-5fcd-4868-a4df-82d097332e7b" (UID: "e1a8c64f-5fcd-4868-a4df-82d097332e7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.845548 4899 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.852149 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e1a8c64f-5fcd-4868-a4df-82d097332e7b" (UID: "e1a8c64f-5fcd-4868-a4df-82d097332e7b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:58:46 crc kubenswrapper[4899]: I0126 20:58:46.946424 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1a8c64f-5fcd-4868-a4df-82d097332e7b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.524480 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1a8c64f-5fcd-4868-a4df-82d097332e7b","Type":"ContainerDied","Data":"29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad"} Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.524531 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a9bd7a6398d429a37ecfa5985dddcccc0b80168f75152c8855dc4e411b50ad" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.524625 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.836984 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.980122 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir\") pod \"e7207368-5f89-416a-8669-f453e80097e2\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.980220 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access\") pod \"e7207368-5f89-416a-8669-f453e80097e2\" (UID: \"e7207368-5f89-416a-8669-f453e80097e2\") " Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.980245 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7207368-5f89-416a-8669-f453e80097e2" (UID: "e7207368-5f89-416a-8669-f453e80097e2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.980537 4899 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7207368-5f89-416a-8669-f453e80097e2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:58:47 crc kubenswrapper[4899]: I0126 20:58:47.986198 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7207368-5f89-416a-8669-f453e80097e2" (UID: "e7207368-5f89-416a-8669-f453e80097e2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:58:48 crc kubenswrapper[4899]: I0126 20:58:48.083668 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7207368-5f89-416a-8669-f453e80097e2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:58:48 crc kubenswrapper[4899]: I0126 20:58:48.531056 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e7207368-5f89-416a-8669-f453e80097e2","Type":"ContainerDied","Data":"86cd668cd535a656fee1b3fc730681daa212b9ef3214e7442da2959e41d1120e"} Jan 26 20:58:48 crc kubenswrapper[4899]: I0126 20:58:48.531453 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cd668cd535a656fee1b3fc730681daa212b9ef3214e7442da2959e41d1120e" Jan 26 20:58:48 crc kubenswrapper[4899]: I0126 20:58:48.531115 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 20:58:55 crc kubenswrapper[4899]: I0126 20:58:55.564453 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerStarted","Data":"9c7f367667d56f86df2c9ee0936f992e811203912375326ca987b46a4ddb0bfd"} Jan 26 20:58:56 crc kubenswrapper[4899]: I0126 20:58:56.576551 4899 generic.go:334] "Generic (PLEG): container finished" podID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerID="9c7f367667d56f86df2c9ee0936f992e811203912375326ca987b46a4ddb0bfd" exitCode=0 Jan 26 20:58:56 crc kubenswrapper[4899]: I0126 20:58:56.576677 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerDied","Data":"9c7f367667d56f86df2c9ee0936f992e811203912375326ca987b46a4ddb0bfd"} Jan 26 20:59:00 crc 
kubenswrapper[4899]: I0126 20:59:00.109364 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.110302 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.110358 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.111168 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.111726 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4" gracePeriod=600 Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.595420 4899 generic.go:334] "Generic (PLEG): container finished" podID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" 
containerID="45628208349d6ceffcc04ac020d84a69252f63e9613bbf6cd62cf799e92f897b" exitCode=0 Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.595476 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerDied","Data":"45628208349d6ceffcc04ac020d84a69252f63e9613bbf6cd62cf799e92f897b"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.615131 4899 generic.go:334] "Generic (PLEG): container finished" podID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerID="d0ae5c4fe9de3e733bb6d41a39af3631f007b44eb0a9860f8198562be7d60a73" exitCode=0 Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.615202 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cqkt" event={"ID":"6e1db38d-09be-44c8-b4d8-636629805c3c","Type":"ContainerDied","Data":"d0ae5c4fe9de3e733bb6d41a39af3631f007b44eb0a9860f8198562be7d60a73"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.624408 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerStarted","Data":"7285a3615ddf0736013b3b35bf3df2676b2d19def04afe698b8f8ef3791a3d34"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.630278 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerStarted","Data":"a3f2d2f1dc5d3ce9940eb7ead3be69676fe88b0e9b43baf10c771a469cb6a0f0"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.636185 4899 generic.go:334] "Generic (PLEG): container finished" podID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerID="66666afd9cced061e8cfb410c3947595345575adaa642a252dc47e39469dcc59" exitCode=0 Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.636221 4899 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-n87nz" event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerDied","Data":"66666afd9cced061e8cfb410c3947595345575adaa642a252dc47e39469dcc59"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.642043 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4" exitCode=0 Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.642126 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4"} Jan 26 20:59:00 crc kubenswrapper[4899]: I0126 20:59:00.692406 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qbjp6" podStartSLOduration=2.921518583 podStartE2EDuration="1m15.692380853s" podCreationTimestamp="2026-01-26 20:57:45 +0000 UTC" firstStartedPulling="2026-01-26 20:57:46.879346039 +0000 UTC m=+156.260934076" lastFinishedPulling="2026-01-26 20:58:59.650208279 +0000 UTC m=+229.031796346" observedRunningTime="2026-01-26 20:59:00.690864322 +0000 UTC m=+230.072452359" watchObservedRunningTime="2026-01-26 20:59:00.692380853 +0000 UTC m=+230.073968890" Jan 26 20:59:01 crc kubenswrapper[4899]: I0126 20:59:01.650978 4899 generic.go:334] "Generic (PLEG): container finished" podID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerID="7285a3615ddf0736013b3b35bf3df2676b2d19def04afe698b8f8ef3791a3d34" exitCode=0 Jan 26 20:59:01 crc kubenswrapper[4899]: I0126 20:59:01.651040 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" 
event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerDied","Data":"7285a3615ddf0736013b3b35bf3df2676b2d19def04afe698b8f8ef3791a3d34"} Jan 26 20:59:01 crc kubenswrapper[4899]: I0126 20:59:01.657082 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerStarted","Data":"84f03000fde827fcc919d4cf4eeab8c6124f43679f6c5f10619b9b0b3f217389"} Jan 26 20:59:01 crc kubenswrapper[4899]: I0126 20:59:01.684506 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800"} Jan 26 20:59:01 crc kubenswrapper[4899]: I0126 20:59:01.689236 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerStarted","Data":"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd"} Jan 26 20:59:02 crc kubenswrapper[4899]: I0126 20:59:02.696914 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cqkt" event={"ID":"6e1db38d-09be-44c8-b4d8-636629805c3c","Type":"ContainerStarted","Data":"aa8dbad9196a4f3277f4c5bb943c2bf0ee7c9e75feb06dc2c0a4a52e4cc92681"} Jan 26 20:59:02 crc kubenswrapper[4899]: I0126 20:59:02.698850 4899 generic.go:334] "Generic (PLEG): container finished" podID="f44aa611-a197-45c2-b4c4-7578006901e1" containerID="84f03000fde827fcc919d4cf4eeab8c6124f43679f6c5f10619b9b0b3f217389" exitCode=0 Jan 26 20:59:02 crc kubenswrapper[4899]: I0126 20:59:02.698986 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" 
event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerDied","Data":"84f03000fde827fcc919d4cf4eeab8c6124f43679f6c5f10619b9b0b3f217389"} Jan 26 20:59:02 crc kubenswrapper[4899]: I0126 20:59:02.700739 4899 generic.go:334] "Generic (PLEG): container finished" podID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerID="cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd" exitCode=0 Jan 26 20:59:02 crc kubenswrapper[4899]: I0126 20:59:02.700775 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerDied","Data":"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd"} Jan 26 20:59:03 crc kubenswrapper[4899]: E0126 20:59:03.148563 4899 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47abc2e2_8494_4bc8_b946_46cbd5079434.slice/crio-conmon-f8ca15a9f811bb422d7bf57171d8f8acd208fafe2472c59cc8e98c538e76f559.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47abc2e2_8494_4bc8_b946_46cbd5079434.slice/crio-f8ca15a9f811bb422d7bf57171d8f8acd208fafe2472c59cc8e98c538e76f559.scope\": RecentStats: unable to find data in memory cache]" Jan 26 20:59:03 crc kubenswrapper[4899]: I0126 20:59:03.708055 4899 generic.go:334] "Generic (PLEG): container finished" podID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerID="f8ca15a9f811bb422d7bf57171d8f8acd208fafe2472c59cc8e98c538e76f559" exitCode=0 Jan 26 20:59:03 crc kubenswrapper[4899]: I0126 20:59:03.708133 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerDied","Data":"f8ca15a9f811bb422d7bf57171d8f8acd208fafe2472c59cc8e98c538e76f559"} Jan 26 20:59:03 crc kubenswrapper[4899]: 
I0126 20:59:03.712077 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n87nz" event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerStarted","Data":"b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6"} Jan 26 20:59:03 crc kubenswrapper[4899]: I0126 20:59:03.747521 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n87nz" podStartSLOduration=2.778166195 podStartE2EDuration="1m17.747500643s" podCreationTimestamp="2026-01-26 20:57:46 +0000 UTC" firstStartedPulling="2026-01-26 20:57:47.926062315 +0000 UTC m=+157.307650352" lastFinishedPulling="2026-01-26 20:59:02.895396763 +0000 UTC m=+232.276984800" observedRunningTime="2026-01-26 20:59:03.745376442 +0000 UTC m=+233.126964489" watchObservedRunningTime="2026-01-26 20:59:03.747500643 +0000 UTC m=+233.129088680" Jan 26 20:59:03 crc kubenswrapper[4899]: I0126 20:59:03.764212 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6cqkt" podStartSLOduration=5.208014272 podStartE2EDuration="1m16.764183461s" podCreationTimestamp="2026-01-26 20:57:47 +0000 UTC" firstStartedPulling="2026-01-26 20:57:49.950312671 +0000 UTC m=+159.331900708" lastFinishedPulling="2026-01-26 20:59:01.50648186 +0000 UTC m=+230.888069897" observedRunningTime="2026-01-26 20:59:03.761322576 +0000 UTC m=+233.142910653" watchObservedRunningTime="2026-01-26 20:59:03.764183461 +0000 UTC m=+233.145771538" Jan 26 20:59:05 crc kubenswrapper[4899]: I0126 20:59:05.724799 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerStarted","Data":"a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73"} Jan 26 20:59:05 crc kubenswrapper[4899]: I0126 20:59:05.747444 4899 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-chbw8" podStartSLOduration=2.895407229 podStartE2EDuration="1m17.747422333s" podCreationTimestamp="2026-01-26 20:57:48 +0000 UTC" firstStartedPulling="2026-01-26 20:57:49.973618757 +0000 UTC m=+159.355206794" lastFinishedPulling="2026-01-26 20:59:04.825633871 +0000 UTC m=+234.207221898" observedRunningTime="2026-01-26 20:59:05.744810866 +0000 UTC m=+235.126398903" watchObservedRunningTime="2026-01-26 20:59:05.747422333 +0000 UTC m=+235.129010370" Jan 26 20:59:05 crc kubenswrapper[4899]: I0126 20:59:05.963340 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:59:05 crc kubenswrapper[4899]: I0126 20:59:05.963432 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:59:06 crc kubenswrapper[4899]: I0126 20:59:06.286371 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:59:06 crc kubenswrapper[4899]: I0126 20:59:06.359536 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:06 crc kubenswrapper[4899]: I0126 20:59:06.359660 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:06 crc kubenswrapper[4899]: I0126 20:59:06.408628 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:07 crc kubenswrapper[4899]: I0126 20:59:07.781054 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-qbjp6" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="registry-server" probeResult="failure" output=< Jan 26 20:59:07 crc kubenswrapper[4899]: timeout: failed to connect 
service ":50051" within 1s Jan 26 20:59:07 crc kubenswrapper[4899]: > Jan 26 20:59:07 crc kubenswrapper[4899]: I0126 20:59:07.978531 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:59:07 crc kubenswrapper[4899]: I0126 20:59:07.978592 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.048973 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.389692 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.389732 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.462853 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.747726 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerStarted","Data":"780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48"} Jan 26 20:59:08 crc kubenswrapper[4899]: I0126 20:59:08.790162 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 20:59:10 crc kubenswrapper[4899]: I0126 20:59:10.781225 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p49cb" podStartSLOduration=7.566902021 podStartE2EDuration="1m21.781201332s" 
podCreationTimestamp="2026-01-26 20:57:49 +0000 UTC" firstStartedPulling="2026-01-26 20:57:51.042284618 +0000 UTC m=+160.423872655" lastFinishedPulling="2026-01-26 20:59:05.256583939 +0000 UTC m=+234.638171966" observedRunningTime="2026-01-26 20:59:10.779581358 +0000 UTC m=+240.161169435" watchObservedRunningTime="2026-01-26 20:59:10.781201332 +0000 UTC m=+240.162789409" Jan 26 20:59:16 crc kubenswrapper[4899]: I0126 20:59:16.006354 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 20:59:16 crc kubenswrapper[4899]: I0126 20:59:16.423583 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:16 crc kubenswrapper[4899]: I0126 20:59:16.477655 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n87nz"] Jan 26 20:59:16 crc kubenswrapper[4899]: I0126 20:59:16.793650 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n87nz" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="registry-server" containerID="cri-o://b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" gracePeriod=2 Jan 26 20:59:18 crc kubenswrapper[4899]: I0126 20:59:18.426057 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:19 crc kubenswrapper[4899]: I0126 20:59:19.655527 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:19 crc kubenswrapper[4899]: I0126 20:59:19.655578 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:19 crc kubenswrapper[4899]: I0126 20:59:19.732723 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:19 crc kubenswrapper[4899]: I0126 20:59:19.869424 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:20 crc kubenswrapper[4899]: I0126 20:59:20.816527 4899 generic.go:334] "Generic (PLEG): container finished" podID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerID="b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" exitCode=0 Jan 26 20:59:20 crc kubenswrapper[4899]: I0126 20:59:20.817591 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n87nz" event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerDied","Data":"b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6"} Jan 26 20:59:20 crc kubenswrapper[4899]: I0126 20:59:20.844382 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chbw8"] Jan 26 20:59:20 crc kubenswrapper[4899]: I0126 20:59:20.844595 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-chbw8" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="registry-server" containerID="cri-o://a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" gracePeriod=2 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.876525 4899 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.877149 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7207368-5f89-416a-8669-f453e80097e2" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877164 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7207368-5f89-416a-8669-f453e80097e2" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.877180 4899 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a8c64f-5fcd-4868-a4df-82d097332e7b" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877186 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a8c64f-5fcd-4868-a4df-82d097332e7b" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877283 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a8c64f-5fcd-4868-a4df-82d097332e7b" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877301 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7207368-5f89-416a-8669-f453e80097e2" containerName="pruner" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877656 4899 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877891 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4" gracePeriod=15 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877914 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54" gracePeriod=15 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.877951 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.878037 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7" gracePeriod=15 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.878039 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d" gracePeriod=15 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.878072 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c" gracePeriod=15 Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.879789 4899 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.879997 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880016 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880033 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880041 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880062 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880070 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880079 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880087 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880100 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880107 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880118 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880125 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 20:59:22 crc kubenswrapper[4899]: E0126 20:59:22.880137 4899 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880144 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880237 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880248 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880256 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880264 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880275 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.880441 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 20:59:22 crc kubenswrapper[4899]: I0126 20:59:22.916296 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017083 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017150 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017209 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017405 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017463 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017487 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017530 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.017752 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.045234 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p49cb"] Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.045522 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p49cb" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="registry-server" containerID="cri-o://780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" gracePeriod=2 Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119325 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119577 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119603 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119405 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119633 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119665 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119687 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119694 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119710 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119719 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119729 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119740 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119755 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119768 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119776 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.119797 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.216334 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.832229 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.833720 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.834419 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c" exitCode=2 Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.837011 4899 generic.go:334] "Generic (PLEG): container finished" podID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerID="a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" exitCode=0 Jan 26 20:59:23 crc kubenswrapper[4899]: I0126 20:59:23.837045 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerDied","Data":"a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73"} Jan 26 20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.799312 4899 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.799572 4899 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 
20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.799829 4899 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.800133 4899 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.800374 4899 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:24 crc kubenswrapper[4899]: I0126 20:59:24.800393 4899 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 20:59:24 crc kubenswrapper[4899]: E0126 20:59:24.800543 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms" Jan 26 20:59:25 crc kubenswrapper[4899]: E0126 20:59:25.002036 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms" Jan 26 20:59:25 crc kubenswrapper[4899]: E0126 20:59:25.402577 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.203484 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.359812 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found" containerID="b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.360125 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found" containerID="b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.360442 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found" containerID="b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.360499 4899 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-n87nz" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="registry-server" Jan 26 20:59:26 crc kubenswrapper[4899]: E0126 20:59:26.360825 4899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-n87nz.188e638e48c48f60 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-n87nz,UID:0b25cc74-1abf-4d2c-b95f-7179eb518d9c,APIVersion:v1,ResourceVersion:28422,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 20:59:26.360530784 +0000 UTC m=+255.742118821,LastTimestamp:2026-01-26 20:59:26.360530784 +0000 UTC m=+255.742118821,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 20:59:26 crc kubenswrapper[4899]: I0126 20:59:26.853423 4899 generic.go:334] "Generic (PLEG): container finished" podID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" containerID="a36cdae073fac2ab9c07e9c2543723b1a413aaac95e12ea9a935376a11e3c989" exitCode=0 Jan 26 20:59:26 crc kubenswrapper[4899]: I0126 20:59:26.853468 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"bb6f8e1b-1528-4285-ab7f-2808df5f1b29","Type":"ContainerDied","Data":"a36cdae073fac2ab9c07e9c2543723b1a413aaac95e12ea9a935376a11e3c989"} Jan 26 20:59:26 crc kubenswrapper[4899]: I0126 20:59:26.854359 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:27 crc kubenswrapper[4899]: E0126 20:59:27.806230 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="3.2s" Jan 26 20:59:27 crc kubenswrapper[4899]: I0126 20:59:27.861100 4899 generic.go:334] "Generic (PLEG): container finished" podID="f44aa611-a197-45c2-b4c4-7578006901e1" containerID="780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" exitCode=0 Jan 26 20:59:27 crc kubenswrapper[4899]: I0126 20:59:27.861170 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerDied","Data":"780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48"} Jan 26 20:59:27 crc kubenswrapper[4899]: I0126 20:59:27.863598 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 20:59:27 crc kubenswrapper[4899]: I0126 20:59:27.864998 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:27 crc kubenswrapper[4899]: I0126 20:59:27.865735 4899 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7" exitCode=0 Jan 26 20:59:28 crc kubenswrapper[4899]: E0126 20:59:28.390019 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73 is running failed: container process not found" containerID="a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:28 crc kubenswrapper[4899]: E0126 20:59:28.390982 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73 is running failed: container process not found" containerID="a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:28 crc kubenswrapper[4899]: E0126 20:59:28.391521 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73 is running failed: container process not found" containerID="a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:28 crc kubenswrapper[4899]: E0126 20:59:28.391585 4899 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-chbw8" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="registry-server" Jan 26 20:59:28 crc 
kubenswrapper[4899]: I0126 20:59:28.870858 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 20:59:28 crc kubenswrapper[4899]: I0126 20:59:28.872491 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:28 crc kubenswrapper[4899]: I0126 20:59:28.873280 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54" exitCode=0 Jan 26 20:59:28 crc kubenswrapper[4899]: I0126 20:59:28.873311 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d" exitCode=0 Jan 26 20:59:28 crc kubenswrapper[4899]: I0126 20:59:28.873343 4899 scope.go:117] "RemoveContainer" containerID="4a1a400f8da23473898d65f94e813d18f0cad09b84fd070af666d7399bfc4073" Jan 26 20:59:28 crc kubenswrapper[4899]: E0126 20:59:28.893142 4899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-n87nz.188e638e48c48f60 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-n87nz,UID:0b25cc74-1abf-4d2c-b95f-7179eb518d9c,APIVersion:v1,ResourceVersion:28422,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 
b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 20:59:26.360530784 +0000 UTC m=+255.742118821,LastTimestamp:2026-01-26 20:59:26.360530784 +0000 UTC m=+255.742118821,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 20:59:29 crc kubenswrapper[4899]: E0126 20:59:29.657110 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48 is running failed: container process not found" containerID="780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:29 crc kubenswrapper[4899]: E0126 20:59:29.657855 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48 is running failed: container process not found" containerID="780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:29 crc kubenswrapper[4899]: E0126 20:59:29.658453 4899 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48 is running failed: container process not found" containerID="780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 20:59:29 crc kubenswrapper[4899]: E0126 20:59:29.658499 4899 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-p49cb" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="registry-server" Jan 26 20:59:29 crc kubenswrapper[4899]: I0126 20:59:29.884693 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:29 crc kubenswrapper[4899]: I0126 20:59:29.885610 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4" exitCode=0 Jan 26 20:59:30 crc kubenswrapper[4899]: I0126 20:59:30.935909 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:31 crc kubenswrapper[4899]: E0126 20:59:31.007774 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="6.4s" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.425162 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.426047 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.432574 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.433428 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.433786 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.436653 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.437121 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.437510 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.438004 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.480647 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access\") pod \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481010 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock\") pod \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 
20:59:33.481154 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8vw6\" (UniqueName: \"kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6\") pod \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481245 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities\") pod \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481375 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content\") pod \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481476 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zckm7\" (UniqueName: \"kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7\") pod \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481561 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir\") pod \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\" (UID: \"bb6f8e1b-1528-4285-ab7f-2808df5f1b29\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481678 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content\") 
pod \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\" (UID: \"858babe5-eeb7-4ab9-a863-68e0c7a61ee7\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.483139 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities\") pod \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\" (UID: \"0b25cc74-1abf-4d2c-b95f-7179eb518d9c\") " Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481060 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock" (OuterVolumeSpecName: "var-lock") pod "bb6f8e1b-1528-4285-ab7f-2808df5f1b29" (UID: "bb6f8e1b-1528-4285-ab7f-2808df5f1b29"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.481617 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bb6f8e1b-1528-4285-ab7f-2808df5f1b29" (UID: "bb6f8e1b-1528-4285-ab7f-2808df5f1b29"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.482475 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities" (OuterVolumeSpecName: "utilities") pod "858babe5-eeb7-4ab9-a863-68e0c7a61ee7" (UID: "858babe5-eeb7-4ab9-a863-68e0c7a61ee7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.484292 4899 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.486249 4899 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.486287 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.484402 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities" (OuterVolumeSpecName: "utilities") pod "0b25cc74-1abf-4d2c-b95f-7179eb518d9c" (UID: "0b25cc74-1abf-4d2c-b95f-7179eb518d9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.486495 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6" (OuterVolumeSpecName: "kube-api-access-g8vw6") pod "0b25cc74-1abf-4d2c-b95f-7179eb518d9c" (UID: "0b25cc74-1abf-4d2c-b95f-7179eb518d9c"). InnerVolumeSpecName "kube-api-access-g8vw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.487373 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7" (OuterVolumeSpecName: "kube-api-access-zckm7") pod "858babe5-eeb7-4ab9-a863-68e0c7a61ee7" (UID: "858babe5-eeb7-4ab9-a863-68e0c7a61ee7"). InnerVolumeSpecName "kube-api-access-zckm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.487436 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bb6f8e1b-1528-4285-ab7f-2808df5f1b29" (UID: "bb6f8e1b-1528-4285-ab7f-2808df5f1b29"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.587289 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.587351 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb6f8e1b-1528-4285-ab7f-2808df5f1b29-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.587367 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8vw6\" (UniqueName: \"kubernetes.io/projected/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-kube-api-access-g8vw6\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.587382 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zckm7\" (UniqueName: 
\"kubernetes.io/projected/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-kube-api-access-zckm7\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.677881 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "858babe5-eeb7-4ab9-a863-68e0c7a61ee7" (UID: "858babe5-eeb7-4ab9-a863-68e0c7a61ee7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.688626 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/858babe5-eeb7-4ab9-a863-68e0c7a61ee7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.725038 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b25cc74-1abf-4d2c-b95f-7179eb518d9c" (UID: "0b25cc74-1abf-4d2c-b95f-7179eb518d9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:33 crc kubenswrapper[4899]: I0126 20:59:33.789432 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b25cc74-1abf-4d2c-b95f-7179eb518d9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.191848 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n87nz" event={"ID":"0b25cc74-1abf-4d2c-b95f-7179eb518d9c","Type":"ContainerDied","Data":"b604546d8ecdaacbc60fd122edeafd6b21c80ce7d4e63f3ff4ffa5e570c1fab4"} Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.192032 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n87nz" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.193443 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.193708 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.193978 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.194014 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bb6f8e1b-1528-4285-ab7f-2808df5f1b29","Type":"ContainerDied","Data":"3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03"} Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.194007 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.194054 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3437690d0a5f35353a01ddc0d1b1a9a257d44a2623c7baa3203976b1670d5e03" Jan 26 20:59:34 crc 
kubenswrapper[4899]: I0126 20:59:34.198239 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chbw8" event={"ID":"858babe5-eeb7-4ab9-a863-68e0c7a61ee7","Type":"ContainerDied","Data":"8755935fb1d21df6401188ea5a53c91ac91fcfb6962b61cff26aeb441c8849cb"} Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.198303 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chbw8" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.199083 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.199578 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.200083 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.208628 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.209267 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.209685 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.214458 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.214874 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.215472 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.225135 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.226098 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.226991 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.939666 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.940442 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.940874 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.941203 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:34 crc kubenswrapper[4899]: I0126 20:59:34.941430 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.004240 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content\") pod \"f44aa611-a197-45c2-b4c4-7578006901e1\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " 
Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.004369 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities\") pod \"f44aa611-a197-45c2-b4c4-7578006901e1\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.004520 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt7gs\" (UniqueName: \"kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs\") pod \"f44aa611-a197-45c2-b4c4-7578006901e1\" (UID: \"f44aa611-a197-45c2-b4c4-7578006901e1\") " Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.005734 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities" (OuterVolumeSpecName: "utilities") pod "f44aa611-a197-45c2-b4c4-7578006901e1" (UID: "f44aa611-a197-45c2-b4c4-7578006901e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.011093 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs" (OuterVolumeSpecName: "kube-api-access-dt7gs") pod "f44aa611-a197-45c2-b4c4-7578006901e1" (UID: "f44aa611-a197-45c2-b4c4-7578006901e1"). InnerVolumeSpecName "kube-api-access-dt7gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.021339 4899 scope.go:117] "RemoveContainer" containerID="b51f0708747b6ecccddf98b7ae78a1fba19fa53104d53d75a5d49b5af661acd6" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.055978 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.059665 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.060182 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.060443 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.060720 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.061084 4899 status_manager.go:851] "Failed to get status for pod" 
podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.061286 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.070736 4899 scope.go:117] "RemoveContainer" containerID="66666afd9cced061e8cfb410c3947595345575adaa642a252dc47e39469dcc59" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105344 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105429 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105448 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105464 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105487 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105584 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105754 4899 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105775 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt7gs\" (UniqueName: \"kubernetes.io/projected/f44aa611-a197-45c2-b4c4-7578006901e1-kube-api-access-dt7gs\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105786 4899 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105797 4899 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.105809 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.106600 4899 scope.go:117] "RemoveContainer" containerID="9ff4e49b06d34aaccd8f73e7928a653908af141c0ef2bcc8e15fd84d86b1e30f" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.128122 4899 scope.go:117] "RemoveContainer" containerID="a0c1e679782be4f5426c00794893c1ccc075716f3f8f3ad47d96dda7816bcc73" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.144778 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"f44aa611-a197-45c2-b4c4-7578006901e1" (UID: "f44aa611-a197-45c2-b4c4-7578006901e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.152152 4899 scope.go:117] "RemoveContainer" containerID="45628208349d6ceffcc04ac020d84a69252f63e9613bbf6cd62cf799e92f897b" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.172773 4899 scope.go:117] "RemoveContainer" containerID="e5b0b696ffee92520077d5293ed7ea1986f381da6ad03bb2aebfbcda74e2d79b" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.206520 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1ef05810db66c72c62ac86d6ce6f9b62ff28dba8fc9a044b7d8966c6bf33bc91"} Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.206792 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44aa611-a197-45c2-b4c4-7578006901e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.227482 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerStarted","Data":"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9"} Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.228404 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.228620 4899 status_manager.go:851] "Failed to get status for pod" 
podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.228895 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.229405 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.229872 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.230290 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.231158 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerStarted","Data":"d8f9fa5a9f29238e1b69999fd2387fe105142bc4361630667efcdd11b524d2c4"} Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.232003 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.233051 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.233311 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.233661 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.233902 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.234202 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.234425 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.252944 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p49cb" event={"ID":"f44aa611-a197-45c2-b4c4-7578006901e1","Type":"ContainerDied","Data":"3e7f88f8617d71a83d6c9ef46a2d6068ca88038a22e3ed29d0632bc5176aa831"} Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.252993 4899 scope.go:117] "RemoveContainer" containerID="780d2499945b4b192ff96789c88f72bec11faefecf85e8b976e31b19ce772c48" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.253078 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p49cb" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.256133 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.256739 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.257115 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.257471 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.258169 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 
38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.258795 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.259062 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.263525 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.265393 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.273081 4899 scope.go:117] "RemoveContainer" containerID="84f03000fde827fcc919d4cf4eeab8c6124f43679f6c5f10619b9b0b3f217389" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.276325 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.276634 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.277102 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.277396 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.277635 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" 
pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.277906 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.280277 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.288642 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.289057 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.289451 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" 
pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.289829 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.290245 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.290501 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.290741 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.303690 4899 scope.go:117] "RemoveContainer" containerID="2ef4af0028480394b65fcb965b196de4876ca75758e59d93965dac1a98b608d6" Jan 26 20:59:35 crc 
kubenswrapper[4899]: I0126 20:59:35.322324 4899 scope.go:117] "RemoveContainer" containerID="7d23572451e324fc2a9d0499ccd70fca746bfbe6bc4d518fb96161e7ea8d4b54" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.355473 4899 scope.go:117] "RemoveContainer" containerID="a2967eeaed7d856964b623093ec2c1eb5db35a62cdf473c0aa06a161119525f7" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.367693 4899 scope.go:117] "RemoveContainer" containerID="1b6975946b8c61c93bd3c5b7790779b7a4fba78e0425316b26e4791c4543e67d" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.386113 4899 scope.go:117] "RemoveContainer" containerID="ac65fb0a84888187d76653638a40d81e3ff912d6c5a258e2bb531e9154baf09c" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.400100 4899 scope.go:117] "RemoveContainer" containerID="4c7d5a64881a18998826d7672e416a3f6822da3e611fbb5475e180ad684d56c4" Jan 26 20:59:35 crc kubenswrapper[4899]: I0126 20:59:35.417245 4899 scope.go:117] "RemoveContainer" containerID="7a5b63813ab6b426d82cde65178678526ad7271a2cce349e3a067fe6310a6d5d" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.177943 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.178031 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.282340 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerStarted","Data":"08fcb5a02c48304bd43441f9dedd0e2e175313ccc065df40c0c6e050a91a14df"} Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.283115 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.283356 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.283755 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.284246 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.284528 4899 status_manager.go:851] "Failed to get status for pod" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" pod="openshift-marketplace/certified-operators-w86jl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w86jl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.284806 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.285120 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.285456 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.287537 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f772e96346cbbdacf9fa8c42582a80e9cf2f4d44009e7c8efe609192306b9a73"} Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.288346 4899 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.288695 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.289156 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.289387 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.289735 4899 status_manager.go:851] "Failed to get status for pod" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" pod="openshift-marketplace/certified-operators-w86jl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w86jl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.290093 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.290287 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.290443 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.290648 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.930451 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.931463 4899 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.932683 4899 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.940635 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.941434 4899 status_manager.go:851] "Failed to get status for pod" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" pod="openshift-marketplace/certified-operators-w86jl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w86jl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.941884 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.942235 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.943312 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.944418 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.945362 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.948781 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 20:59:36 crc 
kubenswrapper[4899]: I0126 20:59:36.961610 4899 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.961660 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:36 crc kubenswrapper[4899]: E0126 20:59:36.962464 4899 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:36 crc kubenswrapper[4899]: I0126 20:59:36.963185 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:36 crc kubenswrapper[4899]: W0126 20:59:36.984351 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1b7540bebd263c72eac502d377e214375a10deff8a4ea97ce1d660261d4977f2 WatchSource:0}: Error finding container 1b7540bebd263c72eac502d377e214375a10deff8a4ea97ce1d660261d4977f2: Status 404 returned error can't find the container with id 1b7540bebd263c72eac502d377e214375a10deff8a4ea97ce1d660261d4977f2 Jan 26 20:59:37 crc kubenswrapper[4899]: I0126 20:59:37.229512 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-chl8v" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="registry-server" probeResult="failure" output=< Jan 26 20:59:37 crc kubenswrapper[4899]: timeout: failed to connect service ":50051" within 1s Jan 26 20:59:37 crc kubenswrapper[4899]: > Jan 26 20:59:37 crc kubenswrapper[4899]: I0126 20:59:37.294750 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b7540bebd263c72eac502d377e214375a10deff8a4ea97ce1d660261d4977f2"} Jan 26 20:59:37 crc kubenswrapper[4899]: E0126 20:59:37.408509 4899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="7s" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.302712 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.303007 4899 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af" exitCode=1 Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.303070 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af"} Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.303480 4899 scope.go:117] "RemoveContainer" containerID="114da2d139311865b1f72689336cf2b820621efe2bc62e0f431d2db91bcf24af" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.303782 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304135 
4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304374 4899 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="695983481d9a89df38c93faa6a995d5e224ff3a46fe67974ff954f484e4c6b88" exitCode=0 Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304379 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304409 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"695983481d9a89df38c93faa6a995d5e224ff3a46fe67974ff954f484e4c6b88"} Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304622 4899 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304638 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.304769 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: E0126 20:59:38.304886 4899 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.305064 4899 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.305280 4899 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.305553 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.305797 4899 status_manager.go:851] "Failed to get status for pod" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" pod="openshift-marketplace/certified-operators-w86jl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w86jl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.306023 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.306329 4899 status_manager.go:851] "Failed to get status for pod" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" pod="openshift-marketplace/certified-operators-chl8v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chl8v\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.306618 4899 status_manager.go:851] "Failed to get status for pod" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.306941 4899 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.307209 4899 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.307581 4899 status_manager.go:851] "Failed to get status for pod" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" pod="openshift-marketplace/redhat-operators-848ms" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-848ms\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.307810 4899 status_manager.go:851] "Failed to get status for pod" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" pod="openshift-marketplace/certified-operators-w86jl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w86jl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.308289 4899 status_manager.go:851] "Failed to get status for pod" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" pod="openshift-marketplace/redhat-marketplace-chbw8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-chbw8\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.308498 4899 status_manager.go:851] "Failed to get status for pod" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" pod="openshift-marketplace/community-operators-n87nz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n87nz\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.308729 4899 status_manager.go:851] "Failed to get status for pod" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" pod="openshift-marketplace/redhat-operators-p49cb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p49cb\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.968889 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:59:38 crc kubenswrapper[4899]: I0126 20:59:38.969656 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:59:39 crc kubenswrapper[4899]: I0126 20:59:39.312332 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 20:59:39 crc kubenswrapper[4899]: I0126 20:59:39.312435 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2753090d5da847b75c003ee0732d845e02cd929a18ce43352e9c3e75fc747942"} Jan 26 20:59:39 crc kubenswrapper[4899]: I0126 20:59:39.316010 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f10d42313013594844f1f6c857f9d1eebe1f7a03b0db4d185cd57ce09440e8f1"} Jan 26 20:59:39 crc kubenswrapper[4899]: I0126 20:59:39.316048 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"00b974ba9c325b7a39030a91a9927fc5653ac26d33414ee707a9ecef0986c26c"} Jan 26 20:59:39 crc kubenswrapper[4899]: I0126 20:59:39.316062 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5721615a3f73134b279c0417637981dbabe57edd2dbcdcdeacab09a6df144b2"} Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.024300 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-848ms" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="registry-server" probeResult="failure" output=< Jan 26 20:59:40 crc kubenswrapper[4899]: timeout: failed to connect service ":50051" within 1s Jan 26 20:59:40 crc kubenswrapper[4899]: > Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.326306 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a326f8d7f6e8668956bb2104f802580839e6be3fcd6ae5ada1980d013ecb42c0"} Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.326366 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84093cfc616648822f3f4a49eb0f04132f2422ea0fca6bcd86e14bb808f491e1"} Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.326510 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.326662 4899 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.326694 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.432957 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Jan 26 20:59:40 crc kubenswrapper[4899]: I0126 20:59:40.441032 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:59:41 crc kubenswrapper[4899]: I0126 20:59:41.331622 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:59:41 crc kubenswrapper[4899]: I0126 20:59:41.963884 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:41 crc kubenswrapper[4899]: I0126 20:59:41.964020 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:41 crc kubenswrapper[4899]: I0126 20:59:41.968907 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:45 crc kubenswrapper[4899]: I0126 20:59:45.649157 4899 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 20:59:45 crc kubenswrapper[4899]: I0126 20:59:45.881619 4899 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f99b752f-00a7-4a01-a2dd-956818079f6a" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.050203 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.050245 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.093892 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.226074 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.272712 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.359810 4899 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.360082 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.363844 4899 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f99b752f-00a7-4a01-a2dd-956818079f6a" Jan 26 20:59:46 crc kubenswrapper[4899]: I0126 20:59:46.398685 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 20:59:49 crc kubenswrapper[4899]: I0126 20:59:49.008890 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:59:49 crc kubenswrapper[4899]: I0126 20:59:49.052320 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 20:59:52 crc kubenswrapper[4899]: I0126 20:59:52.543044 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 20:59:55 crc kubenswrapper[4899]: I0126 20:59:55.466771 4899 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 20:59:55 crc kubenswrapper[4899]: I0126 20:59:55.802151 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 20:59:55 crc kubenswrapper[4899]: I0126 20:59:55.840818 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 20:59:56 crc kubenswrapper[4899]: I0126 20:59:56.033987 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 20:59:56 crc kubenswrapper[4899]: I0126 20:59:56.124544 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 20:59:56 crc kubenswrapper[4899]: I0126 20:59:56.269543 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 20:59:56 crc kubenswrapper[4899]: I0126 20:59:56.474653 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 20:59:56 crc kubenswrapper[4899]: I0126 20:59:56.488898 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.004300 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.044863 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.150291 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 20:59:57 crc 
kubenswrapper[4899]: I0126 20:59:57.157677 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.161351 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.223396 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.241665 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.311607 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.443190 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.507243 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.548713 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 20:59:57 crc kubenswrapper[4899]: I0126 20:59:57.822618 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.004162 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.038813 4899 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.351045 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.472875 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.486747 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.576638 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.594269 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.630506 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.717408 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.805905 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.806863 4899 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.945064 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 20:59:58 crc 
kubenswrapper[4899]: I0126 20:59:58.978188 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 20:59:58 crc kubenswrapper[4899]: I0126 20:59:58.980069 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.096401 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.204367 4899 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.261374 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.284352 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.298170 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.310111 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.331266 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.352009 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.436341 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" 
Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.461099 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.502230 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.525058 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.563542 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.589775 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.593559 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.667732 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.685354 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.688244 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.688443 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.696735 4899 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.755586 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.855091 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.857531 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.898059 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.957666 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 20:59:59 crc kubenswrapper[4899]: I0126 20:59:59.967414 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.067216 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.123856 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.176224 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.223534 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 21:00:00 crc 
kubenswrapper[4899]: I0126 21:00:00.256561 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.263007 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.279488 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.430457 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.476346 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.476845 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.520079 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.561188 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.574086 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.598420 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.698593 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.744968 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.791134 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.818365 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.842608 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.900888 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.955214 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.962146 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 21:00:00 crc kubenswrapper[4899]: I0126 21:00:00.980739 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.004337 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.014097 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.077913 4899 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.079002 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.108211 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.161171 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.250857 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.308200 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.396819 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.422435 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.537757 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.648013 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.744571 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 
21:00:01.894406 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 21:00:01 crc kubenswrapper[4899]: I0126 21:00:01.949114 4899 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.008576 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.082040 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.086886 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.134114 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.151616 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.189498 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.209134 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.243574 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.462076 4899 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.565139 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.631418 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.634238 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.643633 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.662462 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.832014 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.855587 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.862765 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.927770 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.964046 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 21:00:02 crc kubenswrapper[4899]: I0126 21:00:02.995136 4899 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.045922 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.067734 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.067849 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.076919 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.093970 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.195144 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.226464 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.433131 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.434731 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.454859 4899 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.481862 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.552966 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.569624 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.597045 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 21:00:03 crc kubenswrapper[4899]: I0126 21:00:03.749400 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.044978 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.045000 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.058424 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.088532 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.157165 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.177644 4899 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.203692 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.275127 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.302524 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.381753 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.416774 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.470840 4899 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.477872 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.628809 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.645270 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.670948 4899 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.696850 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.716750 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.756706 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.769282 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.773886 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.826480 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.843036 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.853184 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.854658 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.914511 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 21:00:04 crc 
kubenswrapper[4899]: I0126 21:00:04.914558 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.954109 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.965898 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 21:00:04 crc kubenswrapper[4899]: I0126 21:00:04.997403 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.038003 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.274199 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.356406 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.462455 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.522031 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.586384 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.646706 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.698568 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.707404 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.707873 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.738717 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.816005 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.899869 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 21:00:05 crc kubenswrapper[4899]: I0126 21:00:05.999788 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.007669 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.056591 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.069193 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 21:00:06 crc kubenswrapper[4899]: 
I0126 21:00:06.097219 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.227737 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.298390 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.303603 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.333282 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.355607 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.411889 4899 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.414324 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chl8v" podStartSLOduration=34.311138606 podStartE2EDuration="2m21.414311385s" podCreationTimestamp="2026-01-26 20:57:45 +0000 UTC" firstStartedPulling="2026-01-26 20:57:47.921155842 +0000 UTC m=+157.302743879" lastFinishedPulling="2026-01-26 20:59:35.024328621 +0000 UTC m=+264.405916658" observedRunningTime="2026-01-26 20:59:45.698876066 +0000 UTC m=+275.080464113" watchObservedRunningTime="2026-01-26 21:00:06.414311385 +0000 UTC m=+295.795899422" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.414628 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-w86jl" podStartSLOduration=33.239507772 podStartE2EDuration="2m21.414622642s" podCreationTimestamp="2026-01-26 20:57:45 +0000 UTC" firstStartedPulling="2026-01-26 20:57:46.874444486 +0000 UTC m=+156.256032523" lastFinishedPulling="2026-01-26 20:59:35.049559356 +0000 UTC m=+264.431147393" observedRunningTime="2026-01-26 20:59:45.764216647 +0000 UTC m=+275.145804684" watchObservedRunningTime="2026-01-26 21:00:06.414622642 +0000 UTC m=+295.796210679" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.415170 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.415163454 podStartE2EDuration="44.415163454s" podCreationTimestamp="2026-01-26 20:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:59:45.7249847 +0000 UTC m=+275.106572737" watchObservedRunningTime="2026-01-26 21:00:06.415163454 +0000 UTC m=+295.796751491" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.415534 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-848ms" podStartSLOduration=47.768025068 podStartE2EDuration="2m18.415528553s" podCreationTimestamp="2026-01-26 20:57:48 +0000 UTC" firstStartedPulling="2026-01-26 20:57:51.124952083 +0000 UTC m=+160.506540120" lastFinishedPulling="2026-01-26 20:59:21.772455568 +0000 UTC m=+251.154043605" observedRunningTime="2026-01-26 20:59:45.745635231 +0000 UTC m=+275.127223268" watchObservedRunningTime="2026-01-26 21:00:06.415528553 +0000 UTC m=+295.797116590" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416131 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-n87nz","openshift-marketplace/redhat-operators-p49cb","openshift-marketplace/redhat-marketplace-chbw8"] Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416201 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416225 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbjp6","openshift-marketplace/certified-operators-chl8v","openshift-marketplace/certified-operators-w86jl","openshift-marketplace/marketplace-operator-79b997595-xl68z","openshift-marketplace/redhat-operators-848ms","openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416764 4899 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416793 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2610faf8-a867-4ee6-a1c0-1a0a1e24cfaf" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416803 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-848ms" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="registry-server" containerID="cri-o://abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9" gracePeriod=30 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.416983 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerName="marketplace-operator" containerID="cri-o://b5c4177b927f5ad572065eeb4825658044d35cccfa36a493fb61059c478551d3" gracePeriod=30 Jan 26 21:00:06 crc 
kubenswrapper[4899]: I0126 21:00:06.418129 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qbjp6" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="registry-server" containerID="cri-o://a3f2d2f1dc5d3ce9940eb7ead3be69676fe88b0e9b43baf10c771a469cb6a0f0" gracePeriod=30 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.418214 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chl8v" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="registry-server" containerID="cri-o://d8f9fa5a9f29238e1b69999fd2387fe105142bc4361630667efcdd11b524d2c4" gracePeriod=30 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.418343 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w86jl" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="registry-server" containerID="cri-o://08fcb5a02c48304bd43441f9dedd0e2e175313ccc065df40c0c6e050a91a14df" gracePeriod=30 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.418471 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6cqkt" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="registry-server" containerID="cri-o://aa8dbad9196a4f3277f4c5bb943c2bf0ee7c9e75feb06dc2c0a4a52e4cc92681" gracePeriod=30 Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.418842 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.420229 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.421056 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.430311 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.442543 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.442526246 podStartE2EDuration="21.442526246s" podCreationTimestamp="2026-01-26 20:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:00:06.441995264 +0000 UTC m=+295.823583311" watchObservedRunningTime="2026-01-26 21:00:06.442526246 +0000 UTC m=+295.824114283" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.538577 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.538749 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.580994 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.881401 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.915715 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.933463 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.940294 4899 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" path="/var/lib/kubelet/pods/0b25cc74-1abf-4d2c-b95f-7179eb518d9c/volumes" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.941437 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" path="/var/lib/kubelet/pods/858babe5-eeb7-4ab9-a863-68e0c7a61ee7/volumes" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.942030 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" path="/var/lib/kubelet/pods/f44aa611-a197-45c2-b4c4-7578006901e1/volumes" Jan 26 21:00:06 crc kubenswrapper[4899]: I0126 21:00:06.947502 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.080219 4899 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.080436 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f772e96346cbbdacf9fa8c42582a80e9cf2f4d44009e7c8efe609192306b9a73" gracePeriod=5 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.084019 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.284557 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.310529 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.419245 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.460571 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.462476 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.492987 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.519426 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.523266 4899 generic.go:334] "Generic (PLEG): container finished" podID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerID="a3f2d2f1dc5d3ce9940eb7ead3be69676fe88b0e9b43baf10c771a469cb6a0f0" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.523335 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerDied","Data":"a3f2d2f1dc5d3ce9940eb7ead3be69676fe88b0e9b43baf10c771a469cb6a0f0"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.525855 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.526319 4899 generic.go:334] "Generic (PLEG): container finished" podID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerID="abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.526376 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerDied","Data":"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.526394 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848ms" event={"ID":"0bb3afa9-f123-45d3-817a-e5232b62b483","Type":"ContainerDied","Data":"cf5f04f060cc3d6bce32e5b6f87a3e84cc5d676ceab85c803b2beb929507970e"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.526438 4899 scope.go:117] "RemoveContainer" containerID="abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.526608 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-848ms" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.540020 4899 generic.go:334] "Generic (PLEG): container finished" podID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerID="aa8dbad9196a4f3277f4c5bb943c2bf0ee7c9e75feb06dc2c0a4a52e4cc92681" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.540087 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cqkt" event={"ID":"6e1db38d-09be-44c8-b4d8-636629805c3c","Type":"ContainerDied","Data":"aa8dbad9196a4f3277f4c5bb943c2bf0ee7c9e75feb06dc2c0a4a52e4cc92681"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.540781 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cqkt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.543635 4899 generic.go:334] "Generic (PLEG): container finished" podID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerID="d8f9fa5a9f29238e1b69999fd2387fe105142bc4361630667efcdd11b524d2c4" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.543693 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerDied","Data":"d8f9fa5a9f29238e1b69999fd2387fe105142bc4361630667efcdd11b524d2c4"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.544649 4899 generic.go:334] "Generic (PLEG): container finished" podID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerID="b5c4177b927f5ad572065eeb4825658044d35cccfa36a493fb61059c478551d3" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.544687 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" 
event={"ID":"c98d3776-03b4-4c7c-b106-4ca47db60dac","Type":"ContainerDied","Data":"b5c4177b927f5ad572065eeb4825658044d35cccfa36a493fb61059c478551d3"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.544742 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl68z" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.554679 4899 generic.go:334] "Generic (PLEG): container finished" podID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerID="08fcb5a02c48304bd43441f9dedd0e2e175313ccc065df40c0c6e050a91a14df" exitCode=0 Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.554891 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerDied","Data":"08fcb5a02c48304bd43441f9dedd0e2e175313ccc065df40c0c6e050a91a14df"} Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.573360 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.576014 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.580022 4899 scope.go:117] "RemoveContainer" containerID="cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.580790 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.591028 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.606733 4899 scope.go:117] "RemoveContainer" containerID="34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.609177 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.639679 4899 scope.go:117] "RemoveContainer" containerID="abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9" Jan 26 21:00:07 crc kubenswrapper[4899]: E0126 21:00:07.640349 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9\": container with ID starting with abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9 not found: ID does not exist" containerID="abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.640389 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9"} err="failed to get container status \"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9\": rpc error: code = NotFound desc = could not find container \"abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9\": container with ID starting with abbb31daba2bf2a710f97e03756d01c148118199c1cddf9bd6f5bfa3001ae0d9 not found: ID does not exist" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.640414 4899 scope.go:117] "RemoveContainer" 
containerID="cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd" Jan 26 21:00:07 crc kubenswrapper[4899]: E0126 21:00:07.640749 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd\": container with ID starting with cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd not found: ID does not exist" containerID="cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.640888 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd"} err="failed to get container status \"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd\": rpc error: code = NotFound desc = could not find container \"cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd\": container with ID starting with cb4bc13a05eb252a06f76d285fe353b5a4be109bc87610b8d0db544ab6e9fccd not found: ID does not exist" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.640911 4899 scope.go:117] "RemoveContainer" containerID="34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812" Jan 26 21:00:07 crc kubenswrapper[4899]: E0126 21:00:07.642781 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812\": container with ID starting with 34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812 not found: ID does not exist" containerID="34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.642808 4899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812"} err="failed to get container status \"34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812\": rpc error: code = NotFound desc = could not find container \"34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812\": container with ID starting with 34aa22a144a1c6510a2f654b0b4486d70cd94617cdb527361251f12ef66d8812 not found: ID does not exist" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.642828 4899 scope.go:117] "RemoveContainer" containerID="aa8dbad9196a4f3277f4c5bb943c2bf0ee7c9e75feb06dc2c0a4a52e4cc92681" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648287 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities\") pod \"6e1db38d-09be-44c8-b4d8-636629805c3c\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648338 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content\") pod \"0bb3afa9-f123-45d3-817a-e5232b62b483\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648390 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca\") pod \"c98d3776-03b4-4c7c-b106-4ca47db60dac\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648423 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh4nd\" (UniqueName: \"kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd\") pod 
\"0bb3afa9-f123-45d3-817a-e5232b62b483\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648482 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics\") pod \"c98d3776-03b4-4c7c-b106-4ca47db60dac\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648523 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content\") pod \"6e1db38d-09be-44c8-b4d8-636629805c3c\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648556 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities\") pod \"0bb3afa9-f123-45d3-817a-e5232b62b483\" (UID: \"0bb3afa9-f123-45d3-817a-e5232b62b483\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648616 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f46s\" (UniqueName: \"kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s\") pod \"c98d3776-03b4-4c7c-b106-4ca47db60dac\" (UID: \"c98d3776-03b4-4c7c-b106-4ca47db60dac\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.648647 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6js\" (UniqueName: \"kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js\") pod \"6e1db38d-09be-44c8-b4d8-636629805c3c\" (UID: \"6e1db38d-09be-44c8-b4d8-636629805c3c\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.649747 4899 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities" (OuterVolumeSpecName: "utilities") pod "0bb3afa9-f123-45d3-817a-e5232b62b483" (UID: "0bb3afa9-f123-45d3-817a-e5232b62b483"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.649803 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c98d3776-03b4-4c7c-b106-4ca47db60dac" (UID: "c98d3776-03b4-4c7c-b106-4ca47db60dac"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.650284 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities" (OuterVolumeSpecName: "utilities") pod "6e1db38d-09be-44c8-b4d8-636629805c3c" (UID: "6e1db38d-09be-44c8-b4d8-636629805c3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.654005 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c98d3776-03b4-4c7c-b106-4ca47db60dac" (UID: "c98d3776-03b4-4c7c-b106-4ca47db60dac"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.654282 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s" (OuterVolumeSpecName: "kube-api-access-4f46s") pod "c98d3776-03b4-4c7c-b106-4ca47db60dac" (UID: "c98d3776-03b4-4c7c-b106-4ca47db60dac"). InnerVolumeSpecName "kube-api-access-4f46s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.655349 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd" (OuterVolumeSpecName: "kube-api-access-zh4nd") pod "0bb3afa9-f123-45d3-817a-e5232b62b483" (UID: "0bb3afa9-f123-45d3-817a-e5232b62b483"). InnerVolumeSpecName "kube-api-access-zh4nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.657585 4899 scope.go:117] "RemoveContainer" containerID="d0ae5c4fe9de3e733bb6d41a39af3631f007b44eb0a9860f8198562be7d60a73" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.658056 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js" (OuterVolumeSpecName: "kube-api-access-lg6js") pod "6e1db38d-09be-44c8-b4d8-636629805c3c" (UID: "6e1db38d-09be-44c8-b4d8-636629805c3c"). InnerVolumeSpecName "kube-api-access-lg6js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.674920 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.679655 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e1db38d-09be-44c8-b4d8-636629805c3c" (UID: "6e1db38d-09be-44c8-b4d8-636629805c3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.680188 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.680768 4899 scope.go:117] "RemoveContainer" containerID="49f9e92ecb41c5efdffa90625b1cfa6a425a259d6bf1a614e540547c535781f0" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.692915 4899 scope.go:117] "RemoveContainer" containerID="b5c4177b927f5ad572065eeb4825658044d35cccfa36a493fb61059c478551d3" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749333 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content\") pod \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749422 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpl4z\" (UniqueName: \"kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z\") pod \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 
21:00:07.749461 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slmj8\" (UniqueName: \"kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8\") pod \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749530 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content\") pod \"47abc2e2-8494-4bc8-b946-46cbd5079434\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749564 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content\") pod \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749623 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities\") pod \"47abc2e2-8494-4bc8-b946-46cbd5079434\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749689 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bxkz\" (UniqueName: \"kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz\") pod \"47abc2e2-8494-4bc8-b946-46cbd5079434\" (UID: \"47abc2e2-8494-4bc8-b946-46cbd5079434\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749730 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities\") pod 
\"3080d09d-fb91-4cbf-84fe-2b96c34968ba\" (UID: \"3080d09d-fb91-4cbf-84fe-2b96c34968ba\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.749788 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities\") pod \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\" (UID: \"8c4e1101-fd5e-41c2-9d33-e08d7c529c70\") " Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750182 4899 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750211 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750226 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750265 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f46s\" (UniqueName: \"kubernetes.io/projected/c98d3776-03b4-4c7c-b106-4ca47db60dac-kube-api-access-4f46s\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750278 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6js\" (UniqueName: \"kubernetes.io/projected/6e1db38d-09be-44c8-b4d8-636629805c3c-kube-api-access-lg6js\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750292 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1db38d-09be-44c8-b4d8-636629805c3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750304 4899 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c98d3776-03b4-4c7c-b106-4ca47db60dac-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.750342 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh4nd\" (UniqueName: \"kubernetes.io/projected/0bb3afa9-f123-45d3-817a-e5232b62b483-kube-api-access-zh4nd\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.751529 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities" (OuterVolumeSpecName: "utilities") pod "8c4e1101-fd5e-41c2-9d33-e08d7c529c70" (UID: "8c4e1101-fd5e-41c2-9d33-e08d7c529c70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.752656 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities" (OuterVolumeSpecName: "utilities") pod "47abc2e2-8494-4bc8-b946-46cbd5079434" (UID: "47abc2e2-8494-4bc8-b946-46cbd5079434"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.754170 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities" (OuterVolumeSpecName: "utilities") pod "3080d09d-fb91-4cbf-84fe-2b96c34968ba" (UID: "3080d09d-fb91-4cbf-84fe-2b96c34968ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.755281 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz" (OuterVolumeSpecName: "kube-api-access-6bxkz") pod "47abc2e2-8494-4bc8-b946-46cbd5079434" (UID: "47abc2e2-8494-4bc8-b946-46cbd5079434"). InnerVolumeSpecName "kube-api-access-6bxkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.756230 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z" (OuterVolumeSpecName: "kube-api-access-dpl4z") pod "3080d09d-fb91-4cbf-84fe-2b96c34968ba" (UID: "3080d09d-fb91-4cbf-84fe-2b96c34968ba"). InnerVolumeSpecName "kube-api-access-dpl4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.756753 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8" (OuterVolumeSpecName: "kube-api-access-slmj8") pod "8c4e1101-fd5e-41c2-9d33-e08d7c529c70" (UID: "8c4e1101-fd5e-41c2-9d33-e08d7c529c70"). InnerVolumeSpecName "kube-api-access-slmj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.766884 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bb3afa9-f123-45d3-817a-e5232b62b483" (UID: "0bb3afa9-f123-45d3-817a-e5232b62b483"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.806257 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3080d09d-fb91-4cbf-84fe-2b96c34968ba" (UID: "3080d09d-fb91-4cbf-84fe-2b96c34968ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.807324 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47abc2e2-8494-4bc8-b946-46cbd5079434" (UID: "47abc2e2-8494-4bc8-b946-46cbd5079434"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.814824 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c4e1101-fd5e-41c2-9d33-e08d7c529c70" (UID: "8c4e1101-fd5e-41c2-9d33-e08d7c529c70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851387 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb3afa9-f123-45d3-817a-e5232b62b483-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851414 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851423 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpl4z\" (UniqueName: \"kubernetes.io/projected/3080d09d-fb91-4cbf-84fe-2b96c34968ba-kube-api-access-dpl4z\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851434 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slmj8\" (UniqueName: \"kubernetes.io/projected/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-kube-api-access-slmj8\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851443 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851453 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851462 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47abc2e2-8494-4bc8-b946-46cbd5079434-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: 
I0126 21:00:07.851470 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bxkz\" (UniqueName: \"kubernetes.io/projected/47abc2e2-8494-4bc8-b946-46cbd5079434-kube-api-access-6bxkz\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851479 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3080d09d-fb91-4cbf-84fe-2b96c34968ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.851486 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4e1101-fd5e-41c2-9d33-e08d7c529c70-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.857756 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-848ms"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.863666 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-848ms"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.871568 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.876580 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cqkt"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.880917 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl68z"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.883574 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl68z"] Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.923174 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 21:00:07 crc kubenswrapper[4899]: I0126 21:00:07.986460 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.052274 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.062682 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.149246 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.246899 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.407947 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.564824 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chl8v" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.564859 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chl8v" event={"ID":"47abc2e2-8494-4bc8-b946-46cbd5079434","Type":"ContainerDied","Data":"d85b8cdf8969c6c45c1e4e64d44ea9f8114b914c808d80f99935f73f136f5162"} Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.565310 4899 scope.go:117] "RemoveContainer" containerID="d8f9fa5a9f29238e1b69999fd2387fe105142bc4361630667efcdd11b524d2c4" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.567898 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w86jl" event={"ID":"3080d09d-fb91-4cbf-84fe-2b96c34968ba","Type":"ContainerDied","Data":"a067fe5a51be68c9f3aa1f72bb0a34dfa178158beb5192a98fb1e1b74fdf291f"} Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.568042 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w86jl" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.569691 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjp6" event={"ID":"8c4e1101-fd5e-41c2-9d33-e08d7c529c70","Type":"ContainerDied","Data":"73abdea89dd235b3ac4243258c93d69049966fba25cc3e454320701bff5f93c8"} Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.569798 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbjp6" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.581232 4899 scope.go:117] "RemoveContainer" containerID="f8ca15a9f811bb422d7bf57171d8f8acd208fafe2472c59cc8e98c538e76f559" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.599268 4899 scope.go:117] "RemoveContainer" containerID="f3f219cfd81cf720bba5d801c32cf65a3c49fc7c2a3d5a23e4f0d2a0f72fd83c" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.627448 4899 scope.go:117] "RemoveContainer" containerID="08fcb5a02c48304bd43441f9dedd0e2e175313ccc065df40c0c6e050a91a14df" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.627894 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbjp6"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.630618 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.632003 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qbjp6"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.645147 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chl8v"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.650484 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chl8v"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.653333 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w86jl"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.654119 4899 scope.go:117] "RemoveContainer" containerID="7285a3615ddf0736013b3b35bf3df2676b2d19def04afe698b8f8ef3791a3d34" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.656088 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-w86jl"] Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.667550 4899 scope.go:117] "RemoveContainer" containerID="50bfb42d74454f5ef1ffe4e7004e3b9516e86b3dea4902880e636ec63988ebc7" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.680227 4899 scope.go:117] "RemoveContainer" containerID="a3f2d2f1dc5d3ce9940eb7ead3be69676fe88b0e9b43baf10c771a469cb6a0f0" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.695297 4899 scope.go:117] "RemoveContainer" containerID="9c7f367667d56f86df2c9ee0936f992e811203912375326ca987b46a4ddb0bfd" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.708142 4899 scope.go:117] "RemoveContainer" containerID="2f899a4c1886730645c14574b5a8716a1dd8fa8707f0351185387c2bb059444b" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.709127 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.751957 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.752917 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.798238 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.829966 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.881251 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.917801 4899 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.921850 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.951463 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" path="/var/lib/kubelet/pods/0bb3afa9-f123-45d3-817a-e5232b62b483/volumes" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.953861 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" path="/var/lib/kubelet/pods/3080d09d-fb91-4cbf-84fe-2b96c34968ba/volumes" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.955195 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" path="/var/lib/kubelet/pods/47abc2e2-8494-4bc8-b946-46cbd5079434/volumes" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.957355 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" path="/var/lib/kubelet/pods/6e1db38d-09be-44c8-b4d8-636629805c3c/volumes" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.958665 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" path="/var/lib/kubelet/pods/8c4e1101-fd5e-41c2-9d33-e08d7c529c70/volumes" Jan 26 21:00:08 crc kubenswrapper[4899]: I0126 21:00:08.960893 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" path="/var/lib/kubelet/pods/c98d3776-03b4-4c7c-b106-4ca47db60dac/volumes" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.076219 4899 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.187444 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.442050 4899 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.471555 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.533516 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.659457 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.718980 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.770481 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.841643 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.874855 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 21:00:09 crc kubenswrapper[4899]: I0126 21:00:09.968402 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.005015 4899 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.087639 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.131441 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.133728 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.200355 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.364490 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.428909 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.711554 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.780808 4899 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.809371 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 21:00:10 crc kubenswrapper[4899]: I0126 21:00:10.841078 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 21:00:10 crc 
kubenswrapper[4899]: I0126 21:00:10.964759 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.039975 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d"] Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040206 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040219 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040231 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040237 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040249 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040255 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040265 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040271 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: 
E0126 21:00:11.040279 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040285 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040294 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040300 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040308 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040314 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040320 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040325 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040333 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040338 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: 
E0126 21:00:11.040345 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" containerName="installer" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040351 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" containerName="installer" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040358 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040364 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040372 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040378 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040386 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040392 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040400 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040405 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040414 
4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040420 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040427 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040435 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040444 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040451 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040459 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040464 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040470 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040475 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040483 4899 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040490 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="extract-utilities" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040497 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerName="marketplace-operator" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040502 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerName="marketplace-operator" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040509 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040515 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040523 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040528 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040536 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040541 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040549 4899 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040555 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040561 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040566 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 21:00:11 crc kubenswrapper[4899]: E0126 21:00:11.040574 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040579 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="extract-content" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040659 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6f8e1b-1528-4285-ab7f-2808df5f1b29" containerName="installer" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040669 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1db38d-09be-44c8-b4d8-636629805c3c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040677 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="3080d09d-fb91-4cbf-84fe-2b96c34968ba" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040683 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98d3776-03b4-4c7c-b106-4ca47db60dac" containerName="marketplace-operator" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040692 4899 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4e1101-fd5e-41c2-9d33-e08d7c529c70" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040701 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="47abc2e2-8494-4bc8-b946-46cbd5079434" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040711 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="858babe5-eeb7-4ab9-a863-68e0c7a61ee7" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040718 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44aa611-a197-45c2-b4c4-7578006901e1" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040726 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb3afa9-f123-45d3-817a-e5232b62b483" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040737 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.040743 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b25cc74-1abf-4d2c-b95f-7179eb518d9c" containerName="registry-server" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.041114 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.041360 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d"] Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.059219 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.059825 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.130695 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqdv9"] Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.131364 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.133841 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqdv9"] Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.137824 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.137992 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.138079 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.138146 4899 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.160814 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.191031 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzn4\" (UniqueName: \"kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.191137 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.191273 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.200499 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.270136 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292238 
4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzn4\" (UniqueName: \"kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292291 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f86n\" (UniqueName: \"kubernetes.io/projected/6c65153e-2169-4842-9a1c-60b0e20f4255-kube-api-access-4f86n\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292318 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292344 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292392 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume\") pod \"collect-profiles-29491020-72l5d\" 
(UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.292412 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.293147 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.298204 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.304624 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.318743 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzn4\" (UniqueName: \"kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4\") pod \"collect-profiles-29491020-72l5d\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.394075 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.394172 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f86n\" (UniqueName: \"kubernetes.io/projected/6c65153e-2169-4842-9a1c-60b0e20f4255-kube-api-access-4f86n\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.394209 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.394262 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.395387 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.409627 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c65153e-2169-4842-9a1c-60b0e20f4255-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.433039 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f86n\" (UniqueName: \"kubernetes.io/projected/6c65153e-2169-4842-9a1c-60b0e20f4255-kube-api-access-4f86n\") pod \"marketplace-operator-79b997595-fqdv9\" (UID: \"6c65153e-2169-4842-9a1c-60b0e20f4255\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.462888 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.605404 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d"] Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.745413 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqdv9"] Jan 26 21:00:11 crc kubenswrapper[4899]: W0126 21:00:11.756686 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c65153e_2169_4842_9a1c_60b0e20f4255.slice/crio-9ce16aa0550e0898e046400395aa25b9b07eaad7c3def1e758434b8ec0a5382c WatchSource:0}: Error finding container 9ce16aa0550e0898e046400395aa25b9b07eaad7c3def1e758434b8ec0a5382c: Status 404 returned error can't find the container with id 9ce16aa0550e0898e046400395aa25b9b07eaad7c3def1e758434b8ec0a5382c Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.784379 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 21:00:11 crc kubenswrapper[4899]: I0126 21:00:11.838303 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.133014 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.612554 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.612866 4899 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="f772e96346cbbdacf9fa8c42582a80e9cf2f4d44009e7c8efe609192306b9a73" exitCode=137 Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.614341 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" event={"ID":"6c65153e-2169-4842-9a1c-60b0e20f4255","Type":"ContainerStarted","Data":"43d45c37adb51d83d17bcfbaf315a18f9d122f4da8b112148b241733140a1a33"} Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.614364 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" event={"ID":"6c65153e-2169-4842-9a1c-60b0e20f4255","Type":"ContainerStarted","Data":"9ce16aa0550e0898e046400395aa25b9b07eaad7c3def1e758434b8ec0a5382c"} Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.615125 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.616756 4899 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fqdv9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.616817 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" podUID="6c65153e-2169-4842-9a1c-60b0e20f4255" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.617245 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" 
event={"ID":"1a140484-8af6-4f7b-8a49-94aa897b82b0","Type":"ContainerStarted","Data":"88c5e55871aeb995fba3e19097309d16a97d2fdf00a9008d4632fcd549bad8ec"} Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.617274 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" event={"ID":"1a140484-8af6-4f7b-8a49-94aa897b82b0","Type":"ContainerStarted","Data":"2f3dd5ca52183baae39940aa9d1b5470c4ddfccdfb5a3fde4a3f913da3e666bd"} Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.629742 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" podStartSLOduration=1.629720737 podStartE2EDuration="1.629720737s" podCreationTimestamp="2026-01-26 21:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:00:12.629348409 +0000 UTC m=+302.010936496" watchObservedRunningTime="2026-01-26 21:00:12.629720737 +0000 UTC m=+302.011308774" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.644908 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" podStartSLOduration=1.644887186 podStartE2EDuration="1.644887186s" podCreationTimestamp="2026-01-26 21:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:00:12.644136099 +0000 UTC m=+302.025724146" watchObservedRunningTime="2026-01-26 21:00:12.644887186 +0000 UTC m=+302.026475223" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.654844 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.654909 4899 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.814904 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.814981 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815017 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815042 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815113 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815016 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815070 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815096 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815290 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815493 4899 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815508 4899 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815519 4899 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.815528 4899 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.823882 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.917043 4899 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.940907 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.941371 4899 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.955531 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.955572 4899 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e1e453f2-b417-45b0-8636-2015a19604ea" Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.963422 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 21:00:12 crc kubenswrapper[4899]: I0126 21:00:12.963495 4899 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e1e453f2-b417-45b0-8636-2015a19604ea" Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.624128 4899 generic.go:334] "Generic (PLEG): container finished" podID="1a140484-8af6-4f7b-8a49-94aa897b82b0" containerID="88c5e55871aeb995fba3e19097309d16a97d2fdf00a9008d4632fcd549bad8ec" exitCode=0 Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.624354 4899 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" event={"ID":"1a140484-8af6-4f7b-8a49-94aa897b82b0","Type":"ContainerDied","Data":"88c5e55871aeb995fba3e19097309d16a97d2fdf00a9008d4632fcd549bad8ec"} Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.626528 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.626777 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.627111 4899 scope.go:117] "RemoveContainer" containerID="f772e96346cbbdacf9fa8c42582a80e9cf2f4d44009e7c8efe609192306b9a73" Jan 26 21:00:13 crc kubenswrapper[4899]: I0126 21:00:13.629667 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fqdv9" Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.857822 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.945405 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume\") pod \"1a140484-8af6-4f7b-8a49-94aa897b82b0\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.945495 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume\") pod \"1a140484-8af6-4f7b-8a49-94aa897b82b0\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.945527 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpzn4\" (UniqueName: \"kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4\") pod \"1a140484-8af6-4f7b-8a49-94aa897b82b0\" (UID: \"1a140484-8af6-4f7b-8a49-94aa897b82b0\") " Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.946768 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a140484-8af6-4f7b-8a49-94aa897b82b0" (UID: "1a140484-8af6-4f7b-8a49-94aa897b82b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.952551 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a140484-8af6-4f7b-8a49-94aa897b82b0" (UID: "1a140484-8af6-4f7b-8a49-94aa897b82b0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:00:14 crc kubenswrapper[4899]: I0126 21:00:14.952611 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4" (OuterVolumeSpecName: "kube-api-access-mpzn4") pod "1a140484-8af6-4f7b-8a49-94aa897b82b0" (UID: "1a140484-8af6-4f7b-8a49-94aa897b82b0"). InnerVolumeSpecName "kube-api-access-mpzn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.047094 4899 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a140484-8af6-4f7b-8a49-94aa897b82b0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.047144 4899 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a140484-8af6-4f7b-8a49-94aa897b82b0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.047161 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpzn4\" (UniqueName: \"kubernetes.io/projected/1a140484-8af6-4f7b-8a49-94aa897b82b0-kube-api-access-mpzn4\") on node \"crc\" DevicePath \"\"" Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.640726 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.640865 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491020-72l5d" event={"ID":"1a140484-8af6-4f7b-8a49-94aa897b82b0","Type":"ContainerDied","Data":"2f3dd5ca52183baae39940aa9d1b5470c4ddfccdfb5a3fde4a3f913da3e666bd"} Jan 26 21:00:15 crc kubenswrapper[4899]: I0126 21:00:15.640943 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3dd5ca52183baae39940aa9d1b5470c4ddfccdfb5a3fde4a3f913da3e666bd" Jan 26 21:00:37 crc kubenswrapper[4899]: I0126 21:00:37.659908 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 21:01:00 crc kubenswrapper[4899]: I0126 21:01:00.109062 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:01:00 crc kubenswrapper[4899]: I0126 21:01:00.110031 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.285961 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"] Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.286540 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" 
podUID="676ef23d-20dd-4ccb-b846-b83c71305d24" containerName="controller-manager" containerID="cri-o://1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4" gracePeriod=30 Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.388487 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.388730 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" podUID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" containerName="route-controller-manager" containerID="cri-o://1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c" gracePeriod=30 Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.612982 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.701824 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert\") pod \"676ef23d-20dd-4ccb-b846-b83c71305d24\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.701987 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config\") pod \"676ef23d-20dd-4ccb-b846-b83c71305d24\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.702046 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2vdp\" (UniqueName: \"kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp\") pod 
\"676ef23d-20dd-4ccb-b846-b83c71305d24\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.702121 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca\") pod \"676ef23d-20dd-4ccb-b846-b83c71305d24\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.702143 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles\") pod \"676ef23d-20dd-4ccb-b846-b83c71305d24\" (UID: \"676ef23d-20dd-4ccb-b846-b83c71305d24\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.702974 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.703021 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca" (OuterVolumeSpecName: "client-ca") pod "676ef23d-20dd-4ccb-b846-b83c71305d24" (UID: "676ef23d-20dd-4ccb-b846-b83c71305d24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.703030 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "676ef23d-20dd-4ccb-b846-b83c71305d24" (UID: "676ef23d-20dd-4ccb-b846-b83c71305d24"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.703062 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config" (OuterVolumeSpecName: "config") pod "676ef23d-20dd-4ccb-b846-b83c71305d24" (UID: "676ef23d-20dd-4ccb-b846-b83c71305d24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.708192 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp" (OuterVolumeSpecName: "kube-api-access-d2vdp") pod "676ef23d-20dd-4ccb-b846-b83c71305d24" (UID: "676ef23d-20dd-4ccb-b846-b83c71305d24"). InnerVolumeSpecName "kube-api-access-d2vdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.708338 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "676ef23d-20dd-4ccb-b846-b83c71305d24" (UID: "676ef23d-20dd-4ccb-b846-b83c71305d24"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.803693 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca\") pod \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.803765 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert\") pod \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.803830 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config\") pod \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.803864 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkwx9\" (UniqueName: \"kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9\") pod \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\" (UID: \"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2\") " Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804141 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804159 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2vdp\" (UniqueName: \"kubernetes.io/projected/676ef23d-20dd-4ccb-b846-b83c71305d24-kube-api-access-d2vdp\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 
crc kubenswrapper[4899]: I0126 21:01:02.804172 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804184 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/676ef23d-20dd-4ccb-b846-b83c71305d24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804194 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676ef23d-20dd-4ccb-b846-b83c71305d24-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804494 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" (UID: "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.804614 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config" (OuterVolumeSpecName: "config") pod "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" (UID: "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.814390 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" (UID: "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.815792 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9" (OuterVolumeSpecName: "kube-api-access-dkwx9") pod "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" (UID: "bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2"). InnerVolumeSpecName "kube-api-access-dkwx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.906010 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.906051 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.906066 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkwx9\" (UniqueName: \"kubernetes.io/projected/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-kube-api-access-dkwx9\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.906081 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.924513 4899 generic.go:334] "Generic (PLEG): container finished" podID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" containerID="1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c" exitCode=0 Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.924643 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.926145 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" event={"ID":"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2","Type":"ContainerDied","Data":"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c"} Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.926206 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d" event={"ID":"bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2","Type":"ContainerDied","Data":"4c1bbbc6cdcff3dcc9725ba7d3e88e3dba5a265f6a5e378631e16c83c22ea22e"} Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.926229 4899 scope.go:117] "RemoveContainer" containerID="1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.928019 4899 generic.go:334] "Generic (PLEG): container finished" podID="676ef23d-20dd-4ccb-b846-b83c71305d24" containerID="1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4" exitCode=0 Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.928048 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" event={"ID":"676ef23d-20dd-4ccb-b846-b83c71305d24","Type":"ContainerDied","Data":"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4"} Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.928066 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" event={"ID":"676ef23d-20dd-4ccb-b846-b83c71305d24","Type":"ContainerDied","Data":"0cf350b47bfbeb438572584b967e194d5b2569dee03f5ec43693e2d65d992af7"} Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.928122 4899 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dq8kh" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.960280 4899 scope.go:117] "RemoveContainer" containerID="1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c" Jan 26 21:01:02 crc kubenswrapper[4899]: E0126 21:01:02.961183 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c\": container with ID starting with 1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c not found: ID does not exist" containerID="1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.961219 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c"} err="failed to get container status \"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c\": rpc error: code = NotFound desc = could not find container \"1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c\": container with ID starting with 1314f8a8a71ab1ee786381ba969e0c0ec47c79a1501df3e5d3550ec23d583e7c not found: ID does not exist" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.961241 4899 scope.go:117] "RemoveContainer" containerID="1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.970384 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.973780 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tr24d"] Jan 26 21:01:02 crc kubenswrapper[4899]: 
I0126 21:01:02.980478 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"] Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.993512 4899 scope.go:117] "RemoveContainer" containerID="1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.993909 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dq8kh"] Jan 26 21:01:02 crc kubenswrapper[4899]: E0126 21:01:02.994088 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4\": container with ID starting with 1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4 not found: ID does not exist" containerID="1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4" Jan 26 21:01:02 crc kubenswrapper[4899]: I0126 21:01:02.994140 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4"} err="failed to get container status \"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4\": rpc error: code = NotFound desc = could not find container \"1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4\": container with ID starting with 1637720d41e277636e182aa0575a241dbfc4110e9c916ae08efecbaf34c0e0c4 not found: ID does not exist" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086130 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:04 crc kubenswrapper[4899]: E0126 21:01:04.086379 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" containerName="route-controller-manager" Jan 26 21:01:04 crc 
kubenswrapper[4899]: I0126 21:01:04.086396 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" containerName="route-controller-manager" Jan 26 21:01:04 crc kubenswrapper[4899]: E0126 21:01:04.086416 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a140484-8af6-4f7b-8a49-94aa897b82b0" containerName="collect-profiles" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086425 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a140484-8af6-4f7b-8a49-94aa897b82b0" containerName="collect-profiles" Jan 26 21:01:04 crc kubenswrapper[4899]: E0126 21:01:04.086433 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676ef23d-20dd-4ccb-b846-b83c71305d24" containerName="controller-manager" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086440 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="676ef23d-20dd-4ccb-b846-b83c71305d24" containerName="controller-manager" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086636 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="676ef23d-20dd-4ccb-b846-b83c71305d24" containerName="controller-manager" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086658 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" containerName="route-controller-manager" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.086666 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a140484-8af6-4f7b-8a49-94aa897b82b0" containerName="collect-profiles" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.087076 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.090096 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.090513 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.090572 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.090898 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.091113 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.091187 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.091433 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.096077 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.098553 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.099068 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.099365 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.099457 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.099547 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.099991 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.100687 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.105357 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.108085 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224280 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224349 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224376 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224400 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224578 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " 
pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224623 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6hs\" (UniqueName: \"kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224685 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224712 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.224730 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69jlb\" (UniqueName: \"kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.325901 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.325988 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326015 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326057 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326087 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6hs\" (UniqueName: \"kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " 
pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326130 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326156 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326180 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69jlb\" (UniqueName: \"kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.326252 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.327030 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.327030 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.327680 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.328175 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.328207 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.331878 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.338066 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.345110 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69jlb\" (UniqueName: \"kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb\") pod \"route-controller-manager-5599cb6594-twkbb\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.356626 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6hs\" (UniqueName: \"kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs\") pod \"controller-manager-7b5d6b9c5d-zq2zr\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.415890 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.432492 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.666876 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.698860 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:04 crc kubenswrapper[4899]: W0126 21:01:04.705960 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bed03a7_6c80_402e_b084_3e345459e6ca.slice/crio-65ca901ff59f7829ed1fd5750635efd5e5b48b6ad7f2c876bc0138edfe10f918 WatchSource:0}: Error finding container 65ca901ff59f7829ed1fd5750635efd5e5b48b6ad7f2c876bc0138edfe10f918: Status 404 returned error can't find the container with id 65ca901ff59f7829ed1fd5750635efd5e5b48b6ad7f2c876bc0138edfe10f918 Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.939820 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676ef23d-20dd-4ccb-b846-b83c71305d24" path="/var/lib/kubelet/pods/676ef23d-20dd-4ccb-b846-b83c71305d24/volumes" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.941232 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2" path="/var/lib/kubelet/pods/bdf4d6b4-4a71-44a2-9181-8cf3a605a7c2/volumes" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.947782 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" event={"ID":"7bed03a7-6c80-402e-b084-3e345459e6ca","Type":"ContainerStarted","Data":"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00"} Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.947838 4899 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" event={"ID":"7bed03a7-6c80-402e-b084-3e345459e6ca","Type":"ContainerStarted","Data":"65ca901ff59f7829ed1fd5750635efd5e5b48b6ad7f2c876bc0138edfe10f918"} Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.949177 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.950655 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" event={"ID":"1502a431-6ef0-41b4-9536-ad1c7ccb5492","Type":"ContainerStarted","Data":"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41"} Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.950691 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" event={"ID":"1502a431-6ef0-41b4-9536-ad1c7ccb5492","Type":"ContainerStarted","Data":"d8fe4733552f436c475a42dafc556cd7d289d2babdd25d120f69b560b957d400"} Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.951330 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.955468 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:04 crc kubenswrapper[4899]: I0126 21:01:04.965719 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" podStartSLOduration=2.9657088209999998 podStartE2EDuration="2.965708821s" podCreationTimestamp="2026-01-26 21:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 21:01:04.964048322 +0000 UTC m=+354.345636359" watchObservedRunningTime="2026-01-26 21:01:04.965708821 +0000 UTC m=+354.347296858" Jan 26 21:01:05 crc kubenswrapper[4899]: I0126 21:01:05.235078 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:05 crc kubenswrapper[4899]: I0126 21:01:05.259837 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" podStartSLOduration=3.259821274 podStartE2EDuration="3.259821274s" podCreationTimestamp="2026-01-26 21:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:04.990488626 +0000 UTC m=+354.372076673" watchObservedRunningTime="2026-01-26 21:01:05.259821274 +0000 UTC m=+354.641409311" Jan 26 21:01:05 crc kubenswrapper[4899]: I0126 21:01:05.496636 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:05 crc kubenswrapper[4899]: I0126 21:01:05.512400 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:06 crc kubenswrapper[4899]: I0126 21:01:06.961200 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" podUID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" containerName="controller-manager" containerID="cri-o://951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41" gracePeriod=30 Jan 26 21:01:06 crc kubenswrapper[4899]: I0126 21:01:06.962232 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" 
podUID="7bed03a7-6c80-402e-b084-3e345459e6ca" containerName="route-controller-manager" containerID="cri-o://5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00" gracePeriod=30 Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.921924 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.933136 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.951842 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:07 crc kubenswrapper[4899]: E0126 21:01:07.952640 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" containerName="controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.952663 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" containerName="controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: E0126 21:01:07.952688 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bed03a7-6c80-402e-b084-3e345459e6ca" containerName="route-controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.952696 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bed03a7-6c80-402e-b084-3e345459e6ca" containerName="route-controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.952825 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" containerName="controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.952839 4899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7bed03a7-6c80-402e-b084-3e345459e6ca" containerName="route-controller-manager" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.955503 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.959621 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.966963 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config\") pod \"7bed03a7-6c80-402e-b084-3e345459e6ca\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967048 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca\") pod \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967083 4899 generic.go:334] "Generic (PLEG): container finished" podID="7bed03a7-6c80-402e-b084-3e345459e6ca" containerID="5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00" exitCode=0 Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967108 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca\") pod \"7bed03a7-6c80-402e-b084-3e345459e6ca\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967146 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69jlb\" (UniqueName: 
\"kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb\") pod \"7bed03a7-6c80-402e-b084-3e345459e6ca\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967181 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967183 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert\") pod \"7bed03a7-6c80-402e-b084-3e345459e6ca\" (UID: \"7bed03a7-6c80-402e-b084-3e345459e6ca\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967220 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert\") pod \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967270 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" event={"ID":"7bed03a7-6c80-402e-b084-3e345459e6ca","Type":"ContainerDied","Data":"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00"} Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967295 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb" event={"ID":"7bed03a7-6c80-402e-b084-3e345459e6ca","Type":"ContainerDied","Data":"65ca901ff59f7829ed1fd5750635efd5e5b48b6ad7f2c876bc0138edfe10f918"} Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967311 4899 scope.go:117] "RemoveContainer" containerID="5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00" Jan 26 21:01:07 
crc kubenswrapper[4899]: I0126 21:01:07.967366 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles\") pod \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967425 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config\") pod \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967461 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6hs\" (UniqueName: \"kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs\") pod \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\" (UID: \"1502a431-6ef0-41b4-9536-ad1c7ccb5492\") " Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967543 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config" (OuterVolumeSpecName: "config") pod "7bed03a7-6c80-402e-b084-3e345459e6ca" (UID: "7bed03a7-6c80-402e-b084-3e345459e6ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.967762 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.969098 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca" (OuterVolumeSpecName: "client-ca") pod "1502a431-6ef0-41b4-9536-ad1c7ccb5492" (UID: "1502a431-6ef0-41b4-9536-ad1c7ccb5492"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.971318 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1502a431-6ef0-41b4-9536-ad1c7ccb5492" (UID: "1502a431-6ef0-41b4-9536-ad1c7ccb5492"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.971383 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "7bed03a7-6c80-402e-b084-3e345459e6ca" (UID: "7bed03a7-6c80-402e-b084-3e345459e6ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.971540 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config" (OuterVolumeSpecName: "config") pod "1502a431-6ef0-41b4-9536-ad1c7ccb5492" (UID: "1502a431-6ef0-41b4-9536-ad1c7ccb5492"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974003 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1502a431-6ef0-41b4-9536-ad1c7ccb5492" (UID: "1502a431-6ef0-41b4-9536-ad1c7ccb5492"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974255 4899 generic.go:334] "Generic (PLEG): container finished" podID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" containerID="951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41" exitCode=0 Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974313 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" event={"ID":"1502a431-6ef0-41b4-9536-ad1c7ccb5492","Type":"ContainerDied","Data":"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41"} Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974322 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7bed03a7-6c80-402e-b084-3e345459e6ca" (UID: "7bed03a7-6c80-402e-b084-3e345459e6ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974347 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.974348 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr" event={"ID":"1502a431-6ef0-41b4-9536-ad1c7ccb5492","Type":"ContainerDied","Data":"d8fe4733552f436c475a42dafc556cd7d289d2babdd25d120f69b560b957d400"} Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.977487 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb" (OuterVolumeSpecName: "kube-api-access-69jlb") pod "7bed03a7-6c80-402e-b084-3e345459e6ca" (UID: "7bed03a7-6c80-402e-b084-3e345459e6ca"). InnerVolumeSpecName "kube-api-access-69jlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:07 crc kubenswrapper[4899]: I0126 21:01:07.976710 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs" (OuterVolumeSpecName: "kube-api-access-xn6hs") pod "1502a431-6ef0-41b4-9536-ad1c7ccb5492" (UID: "1502a431-6ef0-41b4-9536-ad1c7ccb5492"). InnerVolumeSpecName "kube-api-access-xn6hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.007711 4899 scope.go:117] "RemoveContainer" containerID="5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00" Jan 26 21:01:08 crc kubenswrapper[4899]: E0126 21:01:08.008267 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00\": container with ID starting with 5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00 not found: ID does not exist" containerID="5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.008321 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00"} err="failed to get container status \"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00\": rpc error: code = NotFound desc = could not find container \"5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00\": container with ID starting with 5aab97894734d8430f7e4fa3dcb61c603e4ee45facfb5b309f2be74428e3ce00 not found: ID does not exist" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.008353 4899 scope.go:117] "RemoveContainer" containerID="951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.021972 4899 scope.go:117] "RemoveContainer" containerID="951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41" Jan 26 21:01:08 crc kubenswrapper[4899]: E0126 21:01:08.022894 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41\": container with ID starting with 
951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41 not found: ID does not exist" containerID="951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.023407 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41"} err="failed to get container status \"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41\": rpc error: code = NotFound desc = could not find container \"951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41\": container with ID starting with 951d8f9f8508a5e2bc6938bb451440c3d62d7742940244e4779cb79e20808f41 not found: ID does not exist" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.068830 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.068912 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9c2d\" (UniqueName: \"kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.068970 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-5cg78\" (UID: 
\"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.068987 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069015 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069122 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069151 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bed03a7-6c80-402e-b084-3e345459e6ca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069167 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69jlb\" (UniqueName: \"kubernetes.io/projected/7bed03a7-6c80-402e-b084-3e345459e6ca-kube-api-access-69jlb\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069181 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7bed03a7-6c80-402e-b084-3e345459e6ca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069194 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1502a431-6ef0-41b4-9536-ad1c7ccb5492-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069208 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069222 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1502a431-6ef0-41b4-9536-ad1c7ccb5492-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.069234 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6hs\" (UniqueName: \"kubernetes.io/projected/1502a431-6ef0-41b4-9536-ad1c7ccb5492-kube-api-access-xn6hs\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.170803 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.170864 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 
26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.170916 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9c2d\" (UniqueName: \"kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.170958 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.170975 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.171981 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.172481 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-5cg78\" (UID: 
\"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.174182 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.177034 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.205496 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9c2d\" (UniqueName: \"kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d\") pod \"controller-manager-657bd675cc-5cg78\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.299789 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.301337 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.306009 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5599cb6594-twkbb"] Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.314960 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.322400 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b5d6b9c5d-zq2zr"] Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.516298 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:08 crc kubenswrapper[4899]: W0126 21:01:08.527214 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29c76c5_d4ba_4bc9_a390_a91c9f9cd102.slice/crio-b016e91cbfd47bbaca1511ddbf514cf1aa53b20194834c0a6e2e2d7c75769a4d WatchSource:0}: Error finding container b016e91cbfd47bbaca1511ddbf514cf1aa53b20194834c0a6e2e2d7c75769a4d: Status 404 returned error can't find the container with id b016e91cbfd47bbaca1511ddbf514cf1aa53b20194834c0a6e2e2d7c75769a4d Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.939595 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1502a431-6ef0-41b4-9536-ad1c7ccb5492" path="/var/lib/kubelet/pods/1502a431-6ef0-41b4-9536-ad1c7ccb5492/volumes" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.941106 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7bed03a7-6c80-402e-b084-3e345459e6ca" path="/var/lib/kubelet/pods/7bed03a7-6c80-402e-b084-3e345459e6ca/volumes" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.982534 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" event={"ID":"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102","Type":"ContainerStarted","Data":"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706"} Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.982571 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" event={"ID":"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102","Type":"ContainerStarted","Data":"b016e91cbfd47bbaca1511ddbf514cf1aa53b20194834c0a6e2e2d7c75769a4d"} Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.984368 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:08 crc kubenswrapper[4899]: I0126 21:01:08.993607 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:09 crc kubenswrapper[4899]: I0126 21:01:09.026326 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" podStartSLOduration=4.026309076 podStartE2EDuration="4.026309076s" podCreationTimestamp="2026-01-26 21:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:09.010562689 +0000 UTC m=+358.392150726" watchObservedRunningTime="2026-01-26 21:01:09.026309076 +0000 UTC m=+358.407897113" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.092636 4899 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5"] Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.094517 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.097135 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.097700 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.097997 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.098062 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.098171 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.098179 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.100093 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5"] Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.196073 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-config\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: 
\"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.196113 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g79dx\" (UniqueName: \"kubernetes.io/projected/284736ad-06c3-4c78-8bb2-d271ca7e2a70-kube-api-access-g79dx\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.196264 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-client-ca\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.196475 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284736ad-06c3-4c78-8bb2-d271ca7e2a70-serving-cert\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.297873 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284736ad-06c3-4c78-8bb2-d271ca7e2a70-serving-cert\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.298090 
4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-config\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.298169 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g79dx\" (UniqueName: \"kubernetes.io/projected/284736ad-06c3-4c78-8bb2-d271ca7e2a70-kube-api-access-g79dx\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.298307 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-client-ca\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.299644 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-client-ca\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.299749 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284736ad-06c3-4c78-8bb2-d271ca7e2a70-config\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " 
pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.311536 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284736ad-06c3-4c78-8bb2-d271ca7e2a70-serving-cert\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.317081 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g79dx\" (UniqueName: \"kubernetes.io/projected/284736ad-06c3-4c78-8bb2-d271ca7e2a70-kube-api-access-g79dx\") pod \"route-controller-manager-689656fd7f-tpmt5\" (UID: \"284736ad-06c3-4c78-8bb2-d271ca7e2a70\") " pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.430873 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:10 crc kubenswrapper[4899]: I0126 21:01:10.884874 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5"] Jan 26 21:01:11 crc kubenswrapper[4899]: I0126 21:01:11.002438 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" event={"ID":"284736ad-06c3-4c78-8bb2-d271ca7e2a70","Type":"ContainerStarted","Data":"2424304638d2eeba0cde28e2d07a4daf9175710e66bc0ee2ef5da13d49023687"} Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.008011 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" event={"ID":"284736ad-06c3-4c78-8bb2-d271ca7e2a70","Type":"ContainerStarted","Data":"9f2a3373f56d7247ed912fd9a43f1f92ddcce02a3537806002133a2f32c2b151"} Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.008421 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.018395 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.036373 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-689656fd7f-tpmt5" podStartSLOduration=7.036353611 podStartE2EDuration="7.036353611s" podCreationTimestamp="2026-01-26 21:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:12.033149416 +0000 UTC m=+361.414737463" 
watchObservedRunningTime="2026-01-26 21:01:12.036353611 +0000 UTC m=+361.417941648" Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.928555 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:12 crc kubenswrapper[4899]: I0126 21:01:12.928813 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" podUID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" containerName="controller-manager" containerID="cri-o://f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706" gracePeriod=30 Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.394255 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.541021 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9c2d\" (UniqueName: \"kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d\") pod \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.541079 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert\") pod \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.541113 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles\") pod \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " Jan 26 21:01:13 crc kubenswrapper[4899]: 
I0126 21:01:13.541187 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca\") pod \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.541250 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config\") pod \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\" (UID: \"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102\") " Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.542781 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" (UID: "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.543189 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca" (OuterVolumeSpecName: "client-ca") pod "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" (UID: "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.543326 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config" (OuterVolumeSpecName: "config") pod "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" (UID: "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.550124 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" (UID: "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.550134 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d" (OuterVolumeSpecName: "kube-api-access-d9c2d") pod "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" (UID: "c29c76c5-d4ba-4bc9-a390-a91c9f9cd102"). InnerVolumeSpecName "kube-api-access-d9c2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.642167 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.642193 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9c2d\" (UniqueName: \"kubernetes.io/projected/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-kube-api-access-d9c2d\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.642205 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.642215 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 26 21:01:13 crc kubenswrapper[4899]: I0126 21:01:13.642224 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.019272 4899 generic.go:334] "Generic (PLEG): container finished" podID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" containerID="f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706" exitCode=0 Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.019314 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.019379 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" event={"ID":"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102","Type":"ContainerDied","Data":"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706"} Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.019462 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-5cg78" event={"ID":"c29c76c5-d4ba-4bc9-a390-a91c9f9cd102","Type":"ContainerDied","Data":"b016e91cbfd47bbaca1511ddbf514cf1aa53b20194834c0a6e2e2d7c75769a4d"} Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.019493 4899 scope.go:117] "RemoveContainer" containerID="f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.042183 4899 scope.go:117] "RemoveContainer" containerID="f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706" Jan 26 21:01:14 crc kubenswrapper[4899]: E0126 21:01:14.042840 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706\": container with ID starting with f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706 not found: ID does not exist" containerID="f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.042895 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706"} err="failed to get container status \"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706\": rpc error: code = NotFound desc = could not find container \"f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706\": container with ID starting with f9bbe205f23c8dd0e52464e3c109fb0315bbc2b0ae5e0115bad81f43f499a706 not found: ID does not exist" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.051714 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.057731 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-5cg78"] Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.096819 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:14 crc kubenswrapper[4899]: E0126 21:01:14.097176 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" containerName="controller-manager" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.097209 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" containerName="controller-manager" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.097385 4899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" containerName="controller-manager" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.098017 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.105966 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.106341 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.106609 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.106896 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.113850 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.117260 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.117833 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.126954 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.248904 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.249130 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.249267 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.249327 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.249352 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n77g\" (UniqueName: \"kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" 
Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.350337 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.350396 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.350420 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n77g\" (UniqueName: \"kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.350482 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.350515 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert\") pod \"controller-manager-75bb457f55-547vw\" (UID: 
\"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.351497 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.351799 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.352340 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.358628 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.391341 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n77g\" (UniqueName: 
\"kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g\") pod \"controller-manager-75bb457f55-547vw\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.428663 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.682216 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:14 crc kubenswrapper[4899]: W0126 21:01:14.691245 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63affb67_6031_4706_91d7_ad08b9512482.slice/crio-5724bc25fccc4145d5341fbfab0923716712d6328cb3657fa91d678cc392615d WatchSource:0}: Error finding container 5724bc25fccc4145d5341fbfab0923716712d6328cb3657fa91d678cc392615d: Status 404 returned error can't find the container with id 5724bc25fccc4145d5341fbfab0923716712d6328cb3657fa91d678cc392615d Jan 26 21:01:14 crc kubenswrapper[4899]: I0126 21:01:14.938460 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29c76c5-d4ba-4bc9-a390-a91c9f9cd102" path="/var/lib/kubelet/pods/c29c76c5-d4ba-4bc9-a390-a91c9f9cd102/volumes" Jan 26 21:01:15 crc kubenswrapper[4899]: I0126 21:01:15.027747 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" event={"ID":"63affb67-6031-4706-91d7-ad08b9512482","Type":"ContainerStarted","Data":"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8"} Jan 26 21:01:15 crc kubenswrapper[4899]: I0126 21:01:15.027820 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" 
event={"ID":"63affb67-6031-4706-91d7-ad08b9512482","Type":"ContainerStarted","Data":"5724bc25fccc4145d5341fbfab0923716712d6328cb3657fa91d678cc392615d"} Jan 26 21:01:15 crc kubenswrapper[4899]: I0126 21:01:15.028054 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:15 crc kubenswrapper[4899]: I0126 21:01:15.034196 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:15 crc kubenswrapper[4899]: I0126 21:01:15.052744 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" podStartSLOduration=3.052728094 podStartE2EDuration="3.052728094s" podCreationTimestamp="2026-01-26 21:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:15.050881529 +0000 UTC m=+364.432469626" watchObservedRunningTime="2026-01-26 21:01:15.052728094 +0000 UTC m=+364.434316131" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.180276 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9kb5"] Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.182412 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.213713 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9kb5"] Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.348780 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-tls\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.348851 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.348888 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-certificates\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.348912 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77nj\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-kube-api-access-c77nj\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.348950 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-trusted-ca\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.349122 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05f02d29-9c73-41b2-93a1-2c998cca11b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.349182 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-bound-sa-token\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.349260 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05f02d29-9c73-41b2-93a1-2c998cca11b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.377910 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.450838 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-certificates\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.450900 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c77nj\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-kube-api-access-c77nj\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.450979 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-trusted-ca\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.451028 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05f02d29-9c73-41b2-93a1-2c998cca11b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.451054 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-bound-sa-token\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.451094 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05f02d29-9c73-41b2-93a1-2c998cca11b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.451138 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-tls\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.452150 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-certificates\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.452507 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05f02d29-9c73-41b2-93a1-2c998cca11b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 
21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.454484 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05f02d29-9c73-41b2-93a1-2c998cca11b6-trusted-ca\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.457864 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-registry-tls\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.459053 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05f02d29-9c73-41b2-93a1-2c998cca11b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.470824 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c77nj\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-kube-api-access-c77nj\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.472759 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05f02d29-9c73-41b2-93a1-2c998cca11b6-bound-sa-token\") pod \"image-registry-66df7c8f76-v9kb5\" (UID: \"05f02d29-9c73-41b2-93a1-2c998cca11b6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.504082 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:29 crc kubenswrapper[4899]: I0126 21:01:29.954284 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v9kb5"] Jan 26 21:01:30 crc kubenswrapper[4899]: I0126 21:01:30.109626 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:01:30 crc kubenswrapper[4899]: I0126 21:01:30.110267 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:01:30 crc kubenswrapper[4899]: I0126 21:01:30.118196 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" event={"ID":"05f02d29-9c73-41b2-93a1-2c998cca11b6","Type":"ContainerStarted","Data":"d74df6bfa93b6d1a56ada52e9e7ca0622b99da54665045740f1cd005f595ca64"} Jan 26 21:01:30 crc kubenswrapper[4899]: I0126 21:01:30.118271 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" event={"ID":"05f02d29-9c73-41b2-93a1-2c998cca11b6","Type":"ContainerStarted","Data":"06a200a51304561d3537ded7b50d09bb986f66e275103d49841871fd6543e5b4"} Jan 26 21:01:31 crc kubenswrapper[4899]: I0126 21:01:31.123785 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:31 crc kubenswrapper[4899]: I0126 21:01:31.157192 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" podStartSLOduration=2.157147609 podStartE2EDuration="2.157147609s" podCreationTimestamp="2026-01-26 21:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:31.14741886 +0000 UTC m=+380.529006917" watchObservedRunningTime="2026-01-26 21:01:31.157147609 +0000 UTC m=+380.538735696" Jan 26 21:01:32 crc kubenswrapper[4899]: I0126 21:01:32.482004 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"] Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.427016 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9m2hr"] Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.429119 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.432634 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.449227 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9m2hr"] Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.574201 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-catalog-content\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.574471 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8gd\" (UniqueName: \"kubernetes.io/projected/67ff9111-7a25-4b47-adb6-4e765311e6d9-kube-api-access-nx8gd\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.574541 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-utilities\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.624758 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jkhqw"] Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.626353 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.628596 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.640869 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkhqw"] Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.676709 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8gd\" (UniqueName: \"kubernetes.io/projected/67ff9111-7a25-4b47-adb6-4e765311e6d9-kube-api-access-nx8gd\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.676779 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-utilities\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.676860 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-catalog-content\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.678137 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-catalog-content\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " 
pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.678537 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ff9111-7a25-4b47-adb6-4e765311e6d9-utilities\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.702537 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8gd\" (UniqueName: \"kubernetes.io/projected/67ff9111-7a25-4b47-adb6-4e765311e6d9-kube-api-access-nx8gd\") pod \"redhat-marketplace-9m2hr\" (UID: \"67ff9111-7a25-4b47-adb6-4e765311e6d9\") " pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.757471 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.778557 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-utilities\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.778824 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-catalog-content\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.778891 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xcdv5\" (UniqueName: \"kubernetes.io/projected/6215d320-2289-4e53-9c43-466c52516a43-kube-api-access-xcdv5\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.880063 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-catalog-content\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.880119 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcdv5\" (UniqueName: \"kubernetes.io/projected/6215d320-2289-4e53-9c43-466c52516a43-kube-api-access-xcdv5\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.880160 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-utilities\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.880923 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-catalog-content\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.880962 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6215d320-2289-4e53-9c43-466c52516a43-utilities\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.914828 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcdv5\" (UniqueName: \"kubernetes.io/projected/6215d320-2289-4e53-9c43-466c52516a43-kube-api-access-xcdv5\") pod \"community-operators-jkhqw\" (UID: \"6215d320-2289-4e53-9c43-466c52516a43\") " pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:37 crc kubenswrapper[4899]: I0126 21:01:37.943155 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:38 crc kubenswrapper[4899]: I0126 21:01:38.281348 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9m2hr"] Jan 26 21:01:38 crc kubenswrapper[4899]: I0126 21:01:38.326483 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkhqw"] Jan 26 21:01:38 crc kubenswrapper[4899]: W0126 21:01:38.328820 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6215d320_2289_4e53_9c43_466c52516a43.slice/crio-a3e90629de6e252edc0f76775b3d753206709ea07e62e0f7c86da823de265fcc WatchSource:0}: Error finding container a3e90629de6e252edc0f76775b3d753206709ea07e62e0f7c86da823de265fcc: Status 404 returned error can't find the container with id a3e90629de6e252edc0f76775b3d753206709ea07e62e0f7c86da823de265fcc Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.280567 4899 generic.go:334] "Generic (PLEG): container finished" podID="67ff9111-7a25-4b47-adb6-4e765311e6d9" containerID="e33b72c0eda6bcf3b96b7e4ad8c3f67134a6077a5642c55d8e60c8bdc1646237" exitCode=0 Jan 26 21:01:39 crc 
kubenswrapper[4899]: I0126 21:01:39.281074 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9m2hr" event={"ID":"67ff9111-7a25-4b47-adb6-4e765311e6d9","Type":"ContainerDied","Data":"e33b72c0eda6bcf3b96b7e4ad8c3f67134a6077a5642c55d8e60c8bdc1646237"} Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.281205 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9m2hr" event={"ID":"67ff9111-7a25-4b47-adb6-4e765311e6d9","Type":"ContainerStarted","Data":"2b85edf635fb22f1c3562fa6efd007a2e3190b2ca9b2fd07c08bd0f361f9e9a4"} Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.283434 4899 generic.go:334] "Generic (PLEG): container finished" podID="6215d320-2289-4e53-9c43-466c52516a43" containerID="8e641d01c2e7216fe439fb7111f5cf2b07664db60d8921d1881f78d3dd360251" exitCode=0 Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.283474 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkhqw" event={"ID":"6215d320-2289-4e53-9c43-466c52516a43","Type":"ContainerDied","Data":"8e641d01c2e7216fe439fb7111f5cf2b07664db60d8921d1881f78d3dd360251"} Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.283498 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkhqw" event={"ID":"6215d320-2289-4e53-9c43-466c52516a43","Type":"ContainerStarted","Data":"a3e90629de6e252edc0f76775b3d753206709ea07e62e0f7c86da823de265fcc"} Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.835820 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xkn2z"] Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.837682 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.843594 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.848871 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkn2z"] Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.911820 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-catalog-content\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.911913 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlxqr\" (UniqueName: \"kubernetes.io/projected/01adb97d-6f07-4768-a883-fbcf0a1777ff-kube-api-access-dlxqr\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:39 crc kubenswrapper[4899]: I0126 21:01:39.912084 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-utilities\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.016341 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-catalog-content\") pod \"redhat-operators-xkn2z\" (UID: 
\"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.016399 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlxqr\" (UniqueName: \"kubernetes.io/projected/01adb97d-6f07-4768-a883-fbcf0a1777ff-kube-api-access-dlxqr\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.016490 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-utilities\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.017170 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-utilities\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.019418 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01adb97d-6f07-4768-a883-fbcf0a1777ff-catalog-content\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.025769 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6tzdt"] Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.027274 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.032481 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.042557 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tzdt"] Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.045818 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlxqr\" (UniqueName: \"kubernetes.io/projected/01adb97d-6f07-4768-a883-fbcf0a1777ff-kube-api-access-dlxqr\") pod \"redhat-operators-xkn2z\" (UID: \"01adb97d-6f07-4768-a883-fbcf0a1777ff\") " pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.119140 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5fc\" (UniqueName: \"kubernetes.io/projected/3f877954-92f6-484c-a96e-388422e23f27-kube-api-access-st5fc\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.120779 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-catalog-content\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.120862 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-utilities\") pod \"certified-operators-6tzdt\" (UID: 
\"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.170599 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.223147 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5fc\" (UniqueName: \"kubernetes.io/projected/3f877954-92f6-484c-a96e-388422e23f27-kube-api-access-st5fc\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.223416 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-catalog-content\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.223477 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-utilities\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.224510 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-catalog-content\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.225723 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f877954-92f6-484c-a96e-388422e23f27-utilities\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.249250 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5fc\" (UniqueName: \"kubernetes.io/projected/3f877954-92f6-484c-a96e-388422e23f27-kube-api-access-st5fc\") pod \"certified-operators-6tzdt\" (UID: \"3f877954-92f6-484c-a96e-388422e23f27\") " pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.294134 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9m2hr" event={"ID":"67ff9111-7a25-4b47-adb6-4e765311e6d9","Type":"ContainerStarted","Data":"d1a1e888857ebc1f54a0eff3715cd2679e7bbbb96ebe9e52548cf9a5a5d5344d"} Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.302329 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkhqw" event={"ID":"6215d320-2289-4e53-9c43-466c52516a43","Type":"ContainerStarted","Data":"3027f6383839bb8eb11f77745c4d56f1cd098b9064d4607d5358980a637e9a97"} Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.407479 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.636547 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkn2z"] Jan 26 21:01:40 crc kubenswrapper[4899]: W0126 21:01:40.836298 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f877954_92f6_484c_a96e_388422e23f27.slice/crio-92aee06da8b00753f7bc2ee027700750ac44acd822f11c83c4cd75329b9cf2ff WatchSource:0}: Error finding container 92aee06da8b00753f7bc2ee027700750ac44acd822f11c83c4cd75329b9cf2ff: Status 404 returned error can't find the container with id 92aee06da8b00753f7bc2ee027700750ac44acd822f11c83c4cd75329b9cf2ff Jan 26 21:01:40 crc kubenswrapper[4899]: I0126 21:01:40.837660 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tzdt"] Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.309309 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerStarted","Data":"a91ee36d0784cff1ef5db88153503467737c813a4bf67fe072af80e0b0caf6d7"} Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.309811 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerStarted","Data":"92aee06da8b00753f7bc2ee027700750ac44acd822f11c83c4cd75329b9cf2ff"} Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.312249 4899 generic.go:334] "Generic (PLEG): container finished" podID="01adb97d-6f07-4768-a883-fbcf0a1777ff" containerID="3b2a6f3905ad54ec6cc8526f56f8f687862986b907ff509668aeedc41401647e" exitCode=0 Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.312374 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xkn2z" event={"ID":"01adb97d-6f07-4768-a883-fbcf0a1777ff","Type":"ContainerDied","Data":"3b2a6f3905ad54ec6cc8526f56f8f687862986b907ff509668aeedc41401647e"} Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.312435 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkn2z" event={"ID":"01adb97d-6f07-4768-a883-fbcf0a1777ff","Type":"ContainerStarted","Data":"a05587066cbed49700c277ac03b580388bfa2d8d8dc6fb8f0304986f221d8292"} Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.315054 4899 generic.go:334] "Generic (PLEG): container finished" podID="67ff9111-7a25-4b47-adb6-4e765311e6d9" containerID="d1a1e888857ebc1f54a0eff3715cd2679e7bbbb96ebe9e52548cf9a5a5d5344d" exitCode=0 Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.315149 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9m2hr" event={"ID":"67ff9111-7a25-4b47-adb6-4e765311e6d9","Type":"ContainerDied","Data":"d1a1e888857ebc1f54a0eff3715cd2679e7bbbb96ebe9e52548cf9a5a5d5344d"} Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.318871 4899 generic.go:334] "Generic (PLEG): container finished" podID="6215d320-2289-4e53-9c43-466c52516a43" containerID="3027f6383839bb8eb11f77745c4d56f1cd098b9064d4607d5358980a637e9a97" exitCode=0 Jan 26 21:01:41 crc kubenswrapper[4899]: I0126 21:01:41.318898 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkhqw" event={"ID":"6215d320-2289-4e53-9c43-466c52516a43","Type":"ContainerDied","Data":"3027f6383839bb8eb11f77745c4d56f1cd098b9064d4607d5358980a637e9a97"} Jan 26 21:01:42 crc kubenswrapper[4899]: I0126 21:01:42.308176 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:42 crc kubenswrapper[4899]: I0126 21:01:42.308454 4899 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" podUID="63affb67-6031-4706-91d7-ad08b9512482" containerName="controller-manager" containerID="cri-o://6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8" gracePeriod=30 Jan 26 21:01:42 crc kubenswrapper[4899]: I0126 21:01:42.327557 4899 generic.go:334] "Generic (PLEG): container finished" podID="3f877954-92f6-484c-a96e-388422e23f27" containerID="a91ee36d0784cff1ef5db88153503467737c813a4bf67fe072af80e0b0caf6d7" exitCode=0 Jan 26 21:01:42 crc kubenswrapper[4899]: I0126 21:01:42.327610 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerDied","Data":"a91ee36d0784cff1ef5db88153503467737c813a4bf67fe072af80e0b0caf6d7"} Jan 26 21:01:42 crc kubenswrapper[4899]: I0126 21:01:42.922914 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.071092 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca\") pod \"63affb67-6031-4706-91d7-ad08b9512482\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.071586 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles\") pod \"63affb67-6031-4706-91d7-ad08b9512482\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.071626 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert\") pod \"63affb67-6031-4706-91d7-ad08b9512482\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.071701 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config\") pod \"63affb67-6031-4706-91d7-ad08b9512482\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.071829 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n77g\" (UniqueName: \"kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g\") pod \"63affb67-6031-4706-91d7-ad08b9512482\" (UID: \"63affb67-6031-4706-91d7-ad08b9512482\") " Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.072441 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "63affb67-6031-4706-91d7-ad08b9512482" (UID: "63affb67-6031-4706-91d7-ad08b9512482"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.072490 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config" (OuterVolumeSpecName: "config") pod "63affb67-6031-4706-91d7-ad08b9512482" (UID: "63affb67-6031-4706-91d7-ad08b9512482"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.072534 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca" (OuterVolumeSpecName: "client-ca") pod "63affb67-6031-4706-91d7-ad08b9512482" (UID: "63affb67-6031-4706-91d7-ad08b9512482"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.079046 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63affb67-6031-4706-91d7-ad08b9512482" (UID: "63affb67-6031-4706-91d7-ad08b9512482"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.079190 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g" (OuterVolumeSpecName: "kube-api-access-7n77g") pod "63affb67-6031-4706-91d7-ad08b9512482" (UID: "63affb67-6031-4706-91d7-ad08b9512482"). InnerVolumeSpecName "kube-api-access-7n77g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.173270 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n77g\" (UniqueName: \"kubernetes.io/projected/63affb67-6031-4706-91d7-ad08b9512482-kube-api-access-7n77g\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.173325 4899 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.173339 4899 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.173350 4899 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63affb67-6031-4706-91d7-ad08b9512482-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.173362 4899 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63affb67-6031-4706-91d7-ad08b9512482-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.335264 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkhqw" event={"ID":"6215d320-2289-4e53-9c43-466c52516a43","Type":"ContainerStarted","Data":"efc0dfe704ec089a27b290637df1f4c4b44fd43f55d2124a3f3a503bddf9f1c0"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.337530 4899 generic.go:334] "Generic (PLEG): container finished" podID="63affb67-6031-4706-91d7-ad08b9512482" containerID="6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8" exitCode=0 Jan 26 21:01:43 crc 
kubenswrapper[4899]: I0126 21:01:43.337593 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" event={"ID":"63affb67-6031-4706-91d7-ad08b9512482","Type":"ContainerDied","Data":"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.337672 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" event={"ID":"63affb67-6031-4706-91d7-ad08b9512482","Type":"ContainerDied","Data":"5724bc25fccc4145d5341fbfab0923716712d6328cb3657fa91d678cc392615d"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.337694 4899 scope.go:117] "RemoveContainer" containerID="6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.337821 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-547vw" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.343801 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerStarted","Data":"3b1eca4141be3f61d85c0cea65d503de846f8c82f0bc631571dff8c9b2e2f1b2"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.355295 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkn2z" event={"ID":"01adb97d-6f07-4768-a883-fbcf0a1777ff","Type":"ContainerStarted","Data":"8520e826eae8e5938b51b0adf1f8a8a7d9f4e3803ddc1c1f3009b6dcbb4b59ef"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.361209 4899 scope.go:117] "RemoveContainer" containerID="6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8" Jan 26 21:01:43 crc kubenswrapper[4899]: E0126 21:01:43.363126 4899 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8\": container with ID starting with 6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8 not found: ID does not exist" containerID="6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.363186 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8"} err="failed to get container status \"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8\": rpc error: code = NotFound desc = could not find container \"6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8\": container with ID starting with 6aa2dc729901d2a68981343aa6e044df899eefaed4c5da41d92bbb4bc213c2e8 not found: ID does not exist" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.363960 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9m2hr" event={"ID":"67ff9111-7a25-4b47-adb6-4e765311e6d9","Type":"ContainerStarted","Data":"fed830d9fc9dec7fbc67bf92f52e02369f70ecddde2ba4f859f90df9fb31ce36"} Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.367176 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jkhqw" podStartSLOduration=2.911293972 podStartE2EDuration="6.367151064s" podCreationTimestamp="2026-01-26 21:01:37 +0000 UTC" firstStartedPulling="2026-01-26 21:01:39.285602062 +0000 UTC m=+388.667190109" lastFinishedPulling="2026-01-26 21:01:42.741459124 +0000 UTC m=+392.123047201" observedRunningTime="2026-01-26 21:01:43.361263814 +0000 UTC m=+392.742851861" watchObservedRunningTime="2026-01-26 21:01:43.367151064 +0000 UTC m=+392.748739101" Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.441275 4899 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.444057 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-547vw"] Jan 26 21:01:43 crc kubenswrapper[4899]: I0126 21:01:43.458174 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9m2hr" podStartSLOduration=3.020673111 podStartE2EDuration="6.458149133s" podCreationTimestamp="2026-01-26 21:01:37 +0000 UTC" firstStartedPulling="2026-01-26 21:01:39.28418623 +0000 UTC m=+388.665774267" lastFinishedPulling="2026-01-26 21:01:42.721662242 +0000 UTC m=+392.103250289" observedRunningTime="2026-01-26 21:01:43.455290111 +0000 UTC m=+392.836878158" watchObservedRunningTime="2026-01-26 21:01:43.458149133 +0000 UTC m=+392.839737170" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.119212 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-gxq52"] Jan 26 21:01:44 crc kubenswrapper[4899]: E0126 21:01:44.120174 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63affb67-6031-4706-91d7-ad08b9512482" containerName="controller-manager" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.120208 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="63affb67-6031-4706-91d7-ad08b9512482" containerName="controller-manager" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.120411 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="63affb67-6031-4706-91d7-ad08b9512482" containerName="controller-manager" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.121606 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.125428 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.125646 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.126156 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.126371 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.130371 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.143623 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-gxq52"] Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.144270 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.148292 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.185617 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-config\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " 
pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.185689 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2tc\" (UniqueName: \"kubernetes.io/projected/390e2db8-dd2f-4e81-801b-15893c3bf247-kube-api-access-mk2tc\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.185732 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390e2db8-dd2f-4e81-801b-15893c3bf247-serving-cert\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.185755 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.185820 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-client-ca\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.287738 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mk2tc\" (UniqueName: \"kubernetes.io/projected/390e2db8-dd2f-4e81-801b-15893c3bf247-kube-api-access-mk2tc\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.287838 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390e2db8-dd2f-4e81-801b-15893c3bf247-serving-cert\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.287879 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.288015 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-client-ca\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.288070 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-config\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 
21:01:44.290667 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-config\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.291472 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-client-ca\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.292642 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/390e2db8-dd2f-4e81-801b-15893c3bf247-proxy-ca-bundles\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.298076 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390e2db8-dd2f-4e81-801b-15893c3bf247-serving-cert\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.306237 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2tc\" (UniqueName: \"kubernetes.io/projected/390e2db8-dd2f-4e81-801b-15893c3bf247-kube-api-access-mk2tc\") pod \"controller-manager-657bd675cc-gxq52\" (UID: \"390e2db8-dd2f-4e81-801b-15893c3bf247\") " 
pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.373048 4899 generic.go:334] "Generic (PLEG): container finished" podID="3f877954-92f6-484c-a96e-388422e23f27" containerID="3b1eca4141be3f61d85c0cea65d503de846f8c82f0bc631571dff8c9b2e2f1b2" exitCode=0 Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.373163 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerDied","Data":"3b1eca4141be3f61d85c0cea65d503de846f8c82f0bc631571dff8c9b2e2f1b2"} Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.375906 4899 generic.go:334] "Generic (PLEG): container finished" podID="01adb97d-6f07-4768-a883-fbcf0a1777ff" containerID="8520e826eae8e5938b51b0adf1f8a8a7d9f4e3803ddc1c1f3009b6dcbb4b59ef" exitCode=0 Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.376097 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkn2z" event={"ID":"01adb97d-6f07-4768-a883-fbcf0a1777ff","Type":"ContainerDied","Data":"8520e826eae8e5938b51b0adf1f8a8a7d9f4e3803ddc1c1f3009b6dcbb4b59ef"} Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.485284 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.706896 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-657bd675cc-gxq52"] Jan 26 21:01:44 crc kubenswrapper[4899]: I0126 21:01:44.953145 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63affb67-6031-4706-91d7-ad08b9512482" path="/var/lib/kubelet/pods/63affb67-6031-4706-91d7-ad08b9512482/volumes" Jan 26 21:01:45 crc kubenswrapper[4899]: I0126 21:01:45.386317 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" event={"ID":"390e2db8-dd2f-4e81-801b-15893c3bf247","Type":"ContainerStarted","Data":"fa9116a83cd2fcd01b22a11c42d5d5099efc10fd46b84885c504c1d57f6c7185"} Jan 26 21:01:45 crc kubenswrapper[4899]: I0126 21:01:45.386441 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" event={"ID":"390e2db8-dd2f-4e81-801b-15893c3bf247","Type":"ContainerStarted","Data":"dcb3610880f7d56ba782742459d232b372a61ed47d318f7d7e0c9f2832b48ff4"} Jan 26 21:01:45 crc kubenswrapper[4899]: I0126 21:01:45.388822 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:45 crc kubenswrapper[4899]: I0126 21:01:45.396369 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" Jan 26 21:01:45 crc kubenswrapper[4899]: I0126 21:01:45.423561 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-657bd675cc-gxq52" podStartSLOduration=3.423536552 podStartE2EDuration="3.423536552s" podCreationTimestamp="2026-01-26 21:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:01:45.416480323 +0000 UTC m=+394.798068360" watchObservedRunningTime="2026-01-26 21:01:45.423536552 +0000 UTC m=+394.805124589" Jan 26 21:01:46 crc kubenswrapper[4899]: I0126 21:01:46.395483 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tzdt" event={"ID":"3f877954-92f6-484c-a96e-388422e23f27","Type":"ContainerStarted","Data":"aa302486c219d019e90c53ba4e1a0a47340a262f185f18ac16eb570a22b30551"} Jan 26 21:01:46 crc kubenswrapper[4899]: I0126 21:01:46.399196 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkn2z" event={"ID":"01adb97d-6f07-4768-a883-fbcf0a1777ff","Type":"ContainerStarted","Data":"1c2dc1d8caed9df889475c04569ad9d8e7370260741fda8b3546c554375bf9a0"} Jan 26 21:01:46 crc kubenswrapper[4899]: I0126 21:01:46.416612 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6tzdt" podStartSLOduration=3.519184252 podStartE2EDuration="6.416577014s" podCreationTimestamp="2026-01-26 21:01:40 +0000 UTC" firstStartedPulling="2026-01-26 21:01:42.328906225 +0000 UTC m=+391.710494262" lastFinishedPulling="2026-01-26 21:01:45.226298987 +0000 UTC m=+394.607887024" observedRunningTime="2026-01-26 21:01:46.415835835 +0000 UTC m=+395.797423882" watchObservedRunningTime="2026-01-26 21:01:46.416577014 +0000 UTC m=+395.798165061" Jan 26 21:01:46 crc kubenswrapper[4899]: I0126 21:01:46.435136 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xkn2z" podStartSLOduration=3.5392941589999998 podStartE2EDuration="7.435116084s" podCreationTimestamp="2026-01-26 21:01:39 +0000 UTC" firstStartedPulling="2026-01-26 21:01:41.314051875 +0000 UTC m=+390.695639912" lastFinishedPulling="2026-01-26 21:01:45.2098738 +0000 UTC m=+394.591461837" 
observedRunningTime="2026-01-26 21:01:46.433725999 +0000 UTC m=+395.815314066" watchObservedRunningTime="2026-01-26 21:01:46.435116084 +0000 UTC m=+395.816704121" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.758010 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.758761 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.807015 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.944217 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.945069 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:47 crc kubenswrapper[4899]: I0126 21:01:47.987731 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:48 crc kubenswrapper[4899]: I0126 21:01:48.478410 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9m2hr" Jan 26 21:01:48 crc kubenswrapper[4899]: I0126 21:01:48.484821 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jkhqw" Jan 26 21:01:49 crc kubenswrapper[4899]: I0126 21:01:49.511562 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-v9kb5" Jan 26 21:01:49 crc kubenswrapper[4899]: I0126 21:01:49.571588 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"] Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.170753 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.170801 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.407989 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.408055 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.459986 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:50 crc kubenswrapper[4899]: I0126 21:01:50.507296 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6tzdt" Jan 26 21:01:51 crc kubenswrapper[4899]: I0126 21:01:51.232031 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xkn2z" podUID="01adb97d-6f07-4768-a883-fbcf0a1777ff" containerName="registry-server" probeResult="failure" output=< Jan 26 21:01:51 crc kubenswrapper[4899]: timeout: failed to connect service ":50051" within 1s Jan 26 21:01:51 crc kubenswrapper[4899]: > Jan 26 21:01:57 crc kubenswrapper[4899]: I0126 21:01:57.519735 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" podUID="22530841-f07a-4811-bbdf-9964a1818e16" containerName="oauth-openshift" containerID="cri-o://b076f3b560ed603f5f9c18dc008ebf80ead880bfa18bfb7f20c6927e2eaa3659" gracePeriod=15 
Jan 26 21:01:59 crc kubenswrapper[4899]: I0126 21:01:59.082127 4899 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rq8lx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Jan 26 21:01:59 crc kubenswrapper[4899]: I0126 21:01:59.082217 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" podUID="22530841-f07a-4811-bbdf-9964a1818e16" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.109298 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.110127 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.110235 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.111355 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800"} 
pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.111468 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800" gracePeriod=600 Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.248406 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:02:00 crc kubenswrapper[4899]: I0126 21:02:00.317423 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xkn2z" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.372423 4899 generic.go:334] "Generic (PLEG): container finished" podID="22530841-f07a-4811-bbdf-9964a1818e16" containerID="b076f3b560ed603f5f9c18dc008ebf80ead880bfa18bfb7f20c6927e2eaa3659" exitCode=0 Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.372514 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" event={"ID":"22530841-f07a-4811-bbdf-9964a1818e16","Type":"ContainerDied","Data":"b076f3b560ed603f5f9c18dc008ebf80ead880bfa18bfb7f20c6927e2eaa3659"} Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.374207 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.407301 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-754dc54bdd-2prth"] Jan 26 21:02:03 crc kubenswrapper[4899]: E0126 21:02:03.407506 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22530841-f07a-4811-bbdf-9964a1818e16" containerName="oauth-openshift" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.407517 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="22530841-f07a-4811-bbdf-9964a1818e16" containerName="oauth-openshift" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.407629 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="22530841-f07a-4811-bbdf-9964a1818e16" containerName="oauth-openshift" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.407987 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.424875 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-754dc54bdd-2prth"] Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.494659 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.494707 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: 
\"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.494732 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.494754 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.494773 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495026 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495099 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495129 4899 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495158 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495186 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495223 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ll95\" (UniqueName: \"kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495252 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495257 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495278 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495337 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error\") pod \"22530841-f07a-4811-bbdf-9964a1818e16\" (UID: \"22530841-f07a-4811-bbdf-9964a1818e16\") " Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495524 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495544 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-error\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495573 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsf9\" (UniqueName: \"kubernetes.io/projected/05d5e83a-01c6-4e68-b75a-b38174ac2edc-kube-api-access-gwsf9\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495572 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495619 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495902 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495906 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.495998 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496050 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-dir\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496094 4899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496115 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-router-certs\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496136 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496151 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-login\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496192 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496212 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-session\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496230 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-service-ca\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496245 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-policies\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496276 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: 
\"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496384 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496395 4899 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496405 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496416 4899 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22530841-f07a-4811-bbdf-9964a1818e16-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.496426 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.510352 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.516239 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.516847 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95" (OuterVolumeSpecName: "kube-api-access-5ll95") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "kube-api-access-5ll95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.517164 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.517801 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.518300 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.520691 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.524039 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.524286 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "22530841-f07a-4811-bbdf-9964a1818e16" (UID: "22530841-f07a-4811-bbdf-9964a1818e16"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597133 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-dir\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597204 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597247 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-router-certs\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597254 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-dir\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597273 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597351 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-login\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597384 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597408 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-session\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597431 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-service-ca\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " 
pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597454 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-policies\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597484 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597545 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-error\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597565 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsf9\" (UniqueName: \"kubernetes.io/projected/05d5e83a-01c6-4e68-b75a-b38174ac2edc-kube-api-access-gwsf9\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597660 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597689 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597751 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597764 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597775 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597784 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597794 4899 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597804 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597813 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597824 4899 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/22530841-f07a-4811-bbdf-9964a1818e16-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.597834 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ll95\" (UniqueName: \"kubernetes.io/projected/22530841-f07a-4811-bbdf-9964a1818e16-kube-api-access-5ll95\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.598177 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.598287 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-audit-policies\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.598794 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-service-ca\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.599816 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.602152 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.603377 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: 
\"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.603438 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.603757 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-session\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.604191 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-router-certs\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.604476 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.604679 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-error\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.606379 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/05d5e83a-01c6-4e68-b75a-b38174ac2edc-v4-0-config-user-template-login\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.615725 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsf9\" (UniqueName: \"kubernetes.io/projected/05d5e83a-01c6-4e68-b75a-b38174ac2edc-kube-api-access-gwsf9\") pod \"oauth-openshift-754dc54bdd-2prth\" (UID: \"05d5e83a-01c6-4e68-b75a-b38174ac2edc\") " pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:03 crc kubenswrapper[4899]: I0126 21:02:03.724499 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.125422 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-754dc54bdd-2prth"] Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.381329 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" event={"ID":"22530841-f07a-4811-bbdf-9964a1818e16","Type":"ContainerDied","Data":"b92c6e27d8f4b907d730fe8c00268925ba56d63c5d52bb83e2765c697ff98114"} Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.381905 4899 scope.go:117] "RemoveContainer" containerID="b076f3b560ed603f5f9c18dc008ebf80ead880bfa18bfb7f20c6927e2eaa3659" Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.381427 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rq8lx" Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.384790 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800" exitCode=0 Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.384887 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800"} Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.384939 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476"} Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.390559 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" event={"ID":"05d5e83a-01c6-4e68-b75a-b38174ac2edc","Type":"ContainerStarted","Data":"5171a56633cc094b5547bf9e6f8cee18f0f3bf65701a4850283498c3a6552afb"} Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.404065 4899 scope.go:117] "RemoveContainer" containerID="16b18199fe65050438c43f75a34ce173357134333fcf0881fd32d7fc561416a4" Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.427361 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"] Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.433533 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rq8lx"] Jan 26 21:02:04 crc kubenswrapper[4899]: I0126 21:02:04.938384 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22530841-f07a-4811-bbdf-9964a1818e16" path="/var/lib/kubelet/pods/22530841-f07a-4811-bbdf-9964a1818e16/volumes" Jan 26 21:02:05 crc kubenswrapper[4899]: I0126 21:02:05.401422 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" event={"ID":"05d5e83a-01c6-4e68-b75a-b38174ac2edc","Type":"ContainerStarted","Data":"f547d149174b18d8e0ef42180675a268e5e3141b74cd1f53ad133523ad64f7a5"} Jan 26 21:02:05 crc kubenswrapper[4899]: I0126 21:02:05.421904 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" podStartSLOduration=33.42188339 podStartE2EDuration="33.42188339s" podCreationTimestamp="2026-01-26 21:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:02:05.419069359 +0000 UTC m=+414.800657406" watchObservedRunningTime="2026-01-26 21:02:05.42188339 +0000 UTC m=+414.803471427" Jan 26 21:02:06 crc 
kubenswrapper[4899]: I0126 21:02:06.408983 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:06 crc kubenswrapper[4899]: I0126 21:02:06.414839 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-754dc54bdd-2prth" Jan 26 21:02:14 crc kubenswrapper[4899]: I0126 21:02:14.607040 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" podUID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" containerName="registry" containerID="cri-o://84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554" gracePeriod=30 Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.177776 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286201 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286539 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286608 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: 
\"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286661 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286765 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljn2\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.286814 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.287165 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.288267 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.288686 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token\") pod \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\" (UID: \"75860fb2-d5e0-449b-bd63-6f27e4a82a85\") " Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.289182 4899 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.289234 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.296296 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2" (OuterVolumeSpecName: "kube-api-access-wljn2") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "kube-api-access-wljn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.296887 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.297329 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.303869 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.306343 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.320062 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "75860fb2-d5e0-449b-bd63-6f27e4a82a85" (UID: "75860fb2-d5e0-449b-bd63-6f27e4a82a85"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391566 4899 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75860fb2-d5e0-449b-bd63-6f27e4a82a85-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391699 4899 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391719 4899 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75860fb2-d5e0-449b-bd63-6f27e4a82a85-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391737 4899 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391756 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wljn2\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-kube-api-access-wljn2\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.391777 4899 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75860fb2-d5e0-449b-bd63-6f27e4a82a85-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.483262 4899 generic.go:334] "Generic (PLEG): container finished" podID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" containerID="84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554" exitCode=0 Jan 26 21:02:15 crc 
kubenswrapper[4899]: I0126 21:02:15.483352 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" event={"ID":"75860fb2-d5e0-449b-bd63-6f27e4a82a85","Type":"ContainerDied","Data":"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554"} Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.483409 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" event={"ID":"75860fb2-d5e0-449b-bd63-6f27e4a82a85","Type":"ContainerDied","Data":"c66837c76667c58a7f34d74c83a03ef57070cd618cc306030b9a14eeaabaf074"} Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.483442 4899 scope.go:117] "RemoveContainer" containerID="84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.484014 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vl6d2" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.515980 4899 scope.go:117] "RemoveContainer" containerID="84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554" Jan 26 21:02:15 crc kubenswrapper[4899]: E0126 21:02:15.516825 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554\": container with ID starting with 84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554 not found: ID does not exist" containerID="84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.516869 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554"} err="failed to get container status 
\"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554\": rpc error: code = NotFound desc = could not find container \"84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554\": container with ID starting with 84a30206f3116dc50af8f14cf392bf946fef494a1c9cc39af6e0add9cf1bc554 not found: ID does not exist" Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.556350 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"] Jan 26 21:02:15 crc kubenswrapper[4899]: I0126 21:02:15.564184 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vl6d2"] Jan 26 21:02:15 crc kubenswrapper[4899]: E0126 21:02:15.645887 4899 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75860fb2_d5e0_449b_bd63_6f27e4a82a85.slice/crio-c66837c76667c58a7f34d74c83a03ef57070cd618cc306030b9a14eeaabaf074\": RecentStats: unable to find data in memory cache]" Jan 26 21:02:16 crc kubenswrapper[4899]: I0126 21:02:16.944658 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" path="/var/lib/kubelet/pods/75860fb2-d5e0-449b-bd63-6f27e4a82a85/volumes" Jan 26 21:04:30 crc kubenswrapper[4899]: I0126 21:04:30.110295 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:04:30 crc kubenswrapper[4899]: I0126 21:04:30.110898 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:05:00 crc kubenswrapper[4899]: I0126 21:05:00.109846 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:05:00 crc kubenswrapper[4899]: I0126 21:05:00.111739 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.866409 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrvcx"] Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.867843 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="northd" containerID="cri-o://4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.867882 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-acl-logging" containerID="cri-o://5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.867843 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" 
containerName="kube-rbac-proxy-node" containerID="cri-o://4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.867985 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-controller" containerID="cri-o://aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.867896 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="sbdb" containerID="cri-o://2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.868095 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="nbdb" containerID="cri-o://c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.868126 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" gracePeriod=30 Jan 26 21:05:23 crc kubenswrapper[4899]: I0126 21:05:23.928644 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" containerID="cri-o://cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" gracePeriod=30 Jan 26 21:05:24 crc 
kubenswrapper[4899]: I0126 21:05:24.219559 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/3.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.228219 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovn-acl-logging/0.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.231893 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovn-controller/0.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.232470 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.310320 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xfcrp"] Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.312361 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.312571 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.312881 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.313089 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.313272 4899 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" containerName="registry" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.313448 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" containerName="registry" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.313628 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.313762 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.313890 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-node" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.314069 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-node" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.314194 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="northd" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.314299 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="northd" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.314411 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-acl-logging" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.314592 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-acl-logging" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.315046 4899 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.315261 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.315462 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.315633 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.315792 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kubecfg-setup" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.315994 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kubecfg-setup" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.316176 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="sbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.316307 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="sbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.316439 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="nbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.316544 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="nbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.316959 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" 
containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317108 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="sbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317396 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="northd" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317526 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317646 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317766 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="75860fb2-d5e0-449b-bd63-6f27e4a82a85" containerName="registry" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.317887 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-node" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.318043 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.318206 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.318366 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="nbdb" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.318532 4899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.318696 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovn-acl-logging" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.319053 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.319241 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.319389 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.319508 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.319830 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerName="ovnkube-controller" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.323293 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396742 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396825 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt664\" (UniqueName: \"kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396887 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396953 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396962 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.396992 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397079 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397117 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397200 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397315 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397097 4899 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397248 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397403 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397454 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397488 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397549 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397571 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397622 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash" (OuterVolumeSpecName: "host-slash") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397656 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397678 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.397727 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket" (OuterVolumeSpecName: "log-socket") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398348 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398420 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398432 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398469 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398495 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398507 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398567 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398600 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398634 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.398664 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch\") pod \"30d7d720-d73a-488d-b6ec-755f5da1888c\" (UID: \"30d7d720-d73a-488d-b6ec-755f5da1888c\") " Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399013 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-kubelet\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399047 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399115 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnzj\" (UniqueName: \"kubernetes.io/projected/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-kube-api-access-nbnzj\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399151 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-bin\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399152 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399178 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-config\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399210 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399228 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-env-overrides\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399256 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399271 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-slash\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399293 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log" (OuterVolumeSpecName: "node-log") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399316 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399333 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399347 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-ovn\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399372 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399388 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-var-lib-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399552 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-script-lib\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399610 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-systemd-units\") pod 
\"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399656 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399704 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-etc-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399776 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-netd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399899 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-netns\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.399990 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-log-socket\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400062 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-node-log\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400104 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-systemd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400149 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovn-node-metrics-cert\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400300 4899 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400321 4899 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400341 
4899 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400360 4899 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400382 4899 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400400 4899 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400419 4899 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400437 4899 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400456 4899 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400473 4899 reconciler_common.go:293] "Volume detached for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400491 4899 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d7d720-d73a-488d-b6ec-755f5da1888c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400511 4899 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400556 4899 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400577 4899 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400595 4899 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400612 4899 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.400631 4899 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-ovn\") on node \"crc\" 
DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.404443 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.405037 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664" (OuterVolumeSpecName: "kube-api-access-pt664") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "kube-api-access-pt664". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.414765 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "30d7d720-d73a-488d-b6ec-755f5da1888c" (UID: "30d7d720-d73a-488d-b6ec-755f5da1888c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.502142 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovn-node-metrics-cert\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.502546 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-kubelet\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.502697 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-kubelet\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.502731 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.502950 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbnzj\" (UniqueName: \"kubernetes.io/projected/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-kube-api-access-nbnzj\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 
21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503001 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-bin\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503035 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-config\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503132 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-env-overrides\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503187 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-slash\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503250 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 
21:05:24.503286 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-ovn\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503298 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-slash\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503248 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-bin\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503332 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-var-lib-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503401 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503440 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-script-lib\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503477 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-systemd-units\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503416 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-ovn\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503375 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-var-lib-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503554 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-etc-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503517 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-etc-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503621 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503629 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-systemd-units\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503666 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-netd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503688 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-ovn-kubernetes\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503728 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-cni-netd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503755 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-netns\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503782 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-env-overrides\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503815 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-log-socket\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503889 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-node-log\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503907 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-log-socket\") pod \"ovnkube-node-xfcrp\" (UID: 
\"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503965 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-node-log\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503887 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-host-run-netns\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504010 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-systemd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.503971 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-systemd\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504117 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt664\" (UniqueName: \"kubernetes.io/projected/30d7d720-d73a-488d-b6ec-755f5da1888c-kube-api-access-pt664\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504140 4899 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d7d720-d73a-488d-b6ec-755f5da1888c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504159 4899 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d7d720-d73a-488d-b6ec-755f5da1888c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504592 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-config\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.504760 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovnkube-script-lib\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.508326 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-run-openvswitch\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.509080 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-ovn-node-metrics-cert\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 
21:05:24.524737 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbnzj\" (UniqueName: \"kubernetes.io/projected/ae6a213f-8e79-4807-ab39-ded42a3a8ab0-kube-api-access-nbnzj\") pod \"ovnkube-node-xfcrp\" (UID: \"ae6a213f-8e79-4807-ab39-ded42a3a8ab0\") " pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.639213 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.730236 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"d694a2bd0dc6346b4de29e7ab5bdfa213aed89025bb9cd209cfd78213e0e41c6"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.733715 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovnkube-controller/3.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.736794 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovn-acl-logging/0.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.737470 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrvcx_30d7d720-d73a-488d-b6ec-755f5da1888c/ovn-controller/0.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738163 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738270 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" 
containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738291 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738306 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738326 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738341 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" exitCode=0 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738358 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" exitCode=143 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738373 4899 generic.go:334] "Generic (PLEG): container finished" podID="30d7d720-d73a-488d-b6ec-755f5da1888c" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" exitCode=143 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738311 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738317 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738525 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738559 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738590 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738613 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738636 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 21:05:24 crc 
kubenswrapper[4899]: I0126 21:05:24.738660 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738681 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738694 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738706 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738717 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738728 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738739 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738750 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 21:05:24 crc 
kubenswrapper[4899]: I0126 21:05:24.738762 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738776 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738792 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738805 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738817 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738828 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738841 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738854 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738865 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738876 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738888 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738898 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738913 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738961 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738977 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.738989 4899 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739001 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739012 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739024 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739035 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739048 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739058 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739070 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739086 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrvcx" event={"ID":"30d7d720-d73a-488d-b6ec-755f5da1888c","Type":"ContainerDied","Data":"8b4bf2edb0344a2c53f01be5769d4f9fcba711d745b363acf0c1e4748e28534b"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739104 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739117 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739129 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739141 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739153 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739151 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739165 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739282 4899 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739295 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739307 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.739318 4899 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.744893 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/2.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.746074 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/1.log" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.746130 4899 generic.go:334] "Generic (PLEG): container finished" podID="595ae596-1477-4438-94f7-69400dc1ba20" containerID="6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf" exitCode=2 Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.746179 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerDied","Data":"6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.746223 4899 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5"} Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.747040 4899 scope.go:117] "RemoveContainer" containerID="6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf" Jan 26 21:05:24 crc kubenswrapper[4899]: E0126 21:05:24.747286 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-24sf9_openshift-multus(595ae596-1477-4438-94f7-69400dc1ba20)\"" pod="openshift-multus/multus-24sf9" podUID="595ae596-1477-4438-94f7-69400dc1ba20" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.773817 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.819782 4899 scope.go:117] "RemoveContainer" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.820189 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrvcx"] Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.826053 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrvcx"] Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.845915 4899 scope.go:117] "RemoveContainer" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.921027 4899 scope.go:117] "RemoveContainer" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.939354 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d7d720-d73a-488d-b6ec-755f5da1888c" 
path="/var/lib/kubelet/pods/30d7d720-d73a-488d-b6ec-755f5da1888c/volumes" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.946569 4899 scope.go:117] "RemoveContainer" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.964657 4899 scope.go:117] "RemoveContainer" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:24 crc kubenswrapper[4899]: I0126 21:05:24.981653 4899 scope.go:117] "RemoveContainer" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:24.999592 4899 scope.go:117] "RemoveContainer" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.039057 4899 scope.go:117] "RemoveContainer" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.064502 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.065966 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not exist" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066026 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} err="failed to get container status \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": rpc error: code = NotFound desc = could not find 
container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066061 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.066328 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": container with ID starting with d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e not found: ID does not exist" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066377 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} err="failed to get container status \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": rpc error: code = NotFound desc = could not find container \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": container with ID starting with d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066408 4899 scope.go:117] "RemoveContainer" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.066697 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": container with ID starting with 2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0 not found: ID does 
not exist" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066733 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} err="failed to get container status \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": rpc error: code = NotFound desc = could not find container \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": container with ID starting with 2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.066755 4899 scope.go:117] "RemoveContainer" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.067021 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": container with ID starting with c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5 not found: ID does not exist" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067066 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} err="failed to get container status \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": rpc error: code = NotFound desc = could not find container \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": container with ID starting with c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067094 4899 
scope.go:117] "RemoveContainer" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.067356 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": container with ID starting with 4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4 not found: ID does not exist" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067394 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} err="failed to get container status \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": rpc error: code = NotFound desc = could not find container \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": container with ID starting with 4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067421 4899 scope.go:117] "RemoveContainer" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.067761 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": container with ID starting with 2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af not found: ID does not exist" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067802 4899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} err="failed to get container status \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": rpc error: code = NotFound desc = could not find container \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": container with ID starting with 2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.067829 4899 scope.go:117] "RemoveContainer" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.068104 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": container with ID starting with 4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502 not found: ID does not exist" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068141 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} err="failed to get container status \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": rpc error: code = NotFound desc = could not find container \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": container with ID starting with 4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068162 4899 scope.go:117] "RemoveContainer" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.068399 4899 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": container with ID starting with 5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250 not found: ID does not exist" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068440 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} err="failed to get container status \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": rpc error: code = NotFound desc = could not find container \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": container with ID starting with 5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068468 4899 scope.go:117] "RemoveContainer" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.068761 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": container with ID starting with aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013 not found: ID does not exist" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068801 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} err="failed to get container status \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": rpc error: code = NotFound desc = could not find container 
\"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": container with ID starting with aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.068829 4899 scope.go:117] "RemoveContainer" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: E0126 21:05:25.069425 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": container with ID starting with 284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a not found: ID does not exist" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.069460 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} err="failed to get container status \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": rpc error: code = NotFound desc = could not find container \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": container with ID starting with 284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.069486 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.069914 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} err="failed to get container status \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": rpc error: code = NotFound desc = could not find 
container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.070057 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.070370 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} err="failed to get container status \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": rpc error: code = NotFound desc = could not find container \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": container with ID starting with d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.070405 4899 scope.go:117] "RemoveContainer" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.070661 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} err="failed to get container status \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": rpc error: code = NotFound desc = could not find container \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": container with ID starting with 2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.070709 4899 scope.go:117] "RemoveContainer" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071051 4899 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} err="failed to get container status \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": rpc error: code = NotFound desc = could not find container \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": container with ID starting with c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071082 4899 scope.go:117] "RemoveContainer" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071445 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} err="failed to get container status \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": rpc error: code = NotFound desc = could not find container \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": container with ID starting with 4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071501 4899 scope.go:117] "RemoveContainer" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071796 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} err="failed to get container status \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": rpc error: code = NotFound desc = could not find container \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": container with ID starting with 
2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.071827 4899 scope.go:117] "RemoveContainer" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.072196 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} err="failed to get container status \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": rpc error: code = NotFound desc = could not find container \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": container with ID starting with 4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.072233 4899 scope.go:117] "RemoveContainer" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073003 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} err="failed to get container status \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": rpc error: code = NotFound desc = could not find container \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": container with ID starting with 5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073044 4899 scope.go:117] "RemoveContainer" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073401 4899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} err="failed to get container status \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": rpc error: code = NotFound desc = could not find container \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": container with ID starting with aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073440 4899 scope.go:117] "RemoveContainer" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073696 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} err="failed to get container status \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": rpc error: code = NotFound desc = could not find container \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": container with ID starting with 284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.073745 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.074199 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} err="failed to get container status \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": rpc error: code = NotFound desc = could not find container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not 
exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.074231 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.074717 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} err="failed to get container status \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": rpc error: code = NotFound desc = could not find container \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": container with ID starting with d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.074750 4899 scope.go:117] "RemoveContainer" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075202 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} err="failed to get container status \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": rpc error: code = NotFound desc = could not find container \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": container with ID starting with 2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075236 4899 scope.go:117] "RemoveContainer" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075532 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} err="failed to get container status 
\"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": rpc error: code = NotFound desc = could not find container \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": container with ID starting with c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075571 4899 scope.go:117] "RemoveContainer" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075869 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} err="failed to get container status \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": rpc error: code = NotFound desc = could not find container \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": container with ID starting with 4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.075908 4899 scope.go:117] "RemoveContainer" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.076354 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} err="failed to get container status \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": rpc error: code = NotFound desc = could not find container \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": container with ID starting with 2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.076414 4899 scope.go:117] "RemoveContainer" 
containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.076739 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} err="failed to get container status \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": rpc error: code = NotFound desc = could not find container \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": container with ID starting with 4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.076777 4899 scope.go:117] "RemoveContainer" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.077088 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} err="failed to get container status \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": rpc error: code = NotFound desc = could not find container \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": container with ID starting with 5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.077122 4899 scope.go:117] "RemoveContainer" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.077753 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} err="failed to get container status \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": rpc error: code = NotFound desc = could 
not find container \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": container with ID starting with aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.077774 4899 scope.go:117] "RemoveContainer" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.078115 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} err="failed to get container status \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": rpc error: code = NotFound desc = could not find container \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": container with ID starting with 284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.078160 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.078471 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} err="failed to get container status \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": rpc error: code = NotFound desc = could not find container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.078495 4899 scope.go:117] "RemoveContainer" containerID="d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 
21:05:25.078704 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e"} err="failed to get container status \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": rpc error: code = NotFound desc = could not find container \"d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e\": container with ID starting with d8b8fb9711a8d77831e4f1b702e3ba97a5bb688febbd03f3332ff3af9a9e147e not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.078747 4899 scope.go:117] "RemoveContainer" containerID="2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079100 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0"} err="failed to get container status \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": rpc error: code = NotFound desc = could not find container \"2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0\": container with ID starting with 2abf52b617840700c2cf49fa5cf5d8a8d9c160f7abb9b5e4d7f9e15451d3a8a0 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079127 4899 scope.go:117] "RemoveContainer" containerID="c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079499 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5"} err="failed to get container status \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": rpc error: code = NotFound desc = could not find container \"c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5\": container with ID starting with 
c7556bb70f7716be3688bdbbef0e584821a70e40d35ad286a1dfbddd3ee154f5 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079550 4899 scope.go:117] "RemoveContainer" containerID="4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079909 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4"} err="failed to get container status \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": rpc error: code = NotFound desc = could not find container \"4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4\": container with ID starting with 4fe262f050ec82b47adef5da2ef2710fd2c561eec3ef3d8505f81178304536a4 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.079959 4899 scope.go:117] "RemoveContainer" containerID="2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.080607 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af"} err="failed to get container status \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": rpc error: code = NotFound desc = could not find container \"2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af\": container with ID starting with 2ef2694ba7db1e372aae7a568c40cdf34dd85f99a03031c9f3d51e6ee1a397af not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.080640 4899 scope.go:117] "RemoveContainer" containerID="4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.081040 4899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502"} err="failed to get container status \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": rpc error: code = NotFound desc = could not find container \"4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502\": container with ID starting with 4a0daec660e9c23cfd042e233dcd0762ce3ae468fe948b172467bbd71987c502 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.081118 4899 scope.go:117] "RemoveContainer" containerID="5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.081735 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250"} err="failed to get container status \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": rpc error: code = NotFound desc = could not find container \"5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250\": container with ID starting with 5bb6d06d3d6d6ff8b38183a5aa4e92e54c4df18b2214f529966ca70795ec6250 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.081776 4899 scope.go:117] "RemoveContainer" containerID="aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.082262 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013"} err="failed to get container status \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": rpc error: code = NotFound desc = could not find container \"aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013\": container with ID starting with aa1dc31fa522085515e4a69096ec73f5e1135608fb05d180534dd66fa7410013 not found: ID does not 
exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.082309 4899 scope.go:117] "RemoveContainer" containerID="284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.083040 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a"} err="failed to get container status \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": rpc error: code = NotFound desc = could not find container \"284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a\": container with ID starting with 284ad70c13080d566ebbab59de5cefb5a78e0d885aab6281002831296945203a not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.083088 4899 scope.go:117] "RemoveContainer" containerID="cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.083447 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417"} err="failed to get container status \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": rpc error: code = NotFound desc = could not find container \"cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417\": container with ID starting with cd6841aa82a0e9686ceca774a0c98181c0944e39397084fafb9e35f9d83e3417 not found: ID does not exist" Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.761963 4899 generic.go:334] "Generic (PLEG): container finished" podID="ae6a213f-8e79-4807-ab39-ded42a3a8ab0" containerID="116cc7457f56671bb9c1a49979f9e3894229297df7167ac1ba0f8372f3941820" exitCode=0 Jan 26 21:05:25 crc kubenswrapper[4899]: I0126 21:05:25.762098 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" 
event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerDied","Data":"116cc7457f56671bb9c1a49979f9e3894229297df7167ac1ba0f8372f3941820"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778155 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"c8e65a9e176e36a0129939c101ae2f71329f7627e598df504fc0663379d50e67"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778540 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"817d6b1af7d6d53d8dc1f58680e16b7cb8763071de854fc7ace00c53d8fd9b9a"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778551 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"3443c381559bb6cb7b0bdf0b6e6c9b0881fad0641a3ff1e04035c1e0cb9a695e"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778561 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"d4a9f597bc407a32c2bec807fcde9c4029146fb0a36000ce0ccadde5eac798c3"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778573 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"84ed2d41ebd769671463137480c491556fdd49401ee02009d00266bb93bffbc0"} Jan 26 21:05:26 crc kubenswrapper[4899]: I0126 21:05:26.778583 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" 
event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"226f077c6d05a0a8ecf2d879d4f945e1ebafff18bbde723b6532f800966f97d3"} Jan 26 21:05:28 crc kubenswrapper[4899]: I0126 21:05:28.797818 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"6d2c466c295117197b81d15579a8dd39dd8bee4490738d191c0b5076ecee0dcc"} Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.109218 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.110171 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.110282 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.111066 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.111149 4899 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476" gracePeriod=600 Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.827951 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476" exitCode=0 Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.828021 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476"} Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.828345 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc"} Jan 26 21:05:30 crc kubenswrapper[4899]: I0126 21:05:30.828372 4899 scope.go:117] "RemoveContainer" containerID="3f94e6baab8018d5397a8277f89202396b5fce9952d69ae12adb866883853800" Jan 26 21:05:31 crc kubenswrapper[4899]: I0126 21:05:31.841873 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" event={"ID":"ae6a213f-8e79-4807-ab39-ded42a3a8ab0","Type":"ContainerStarted","Data":"bdfa3ef705dccd4e6a5fc63ddecbb3bcd28d1365d25af28bf44820a60bb8448b"} Jan 26 21:05:31 crc kubenswrapper[4899]: I0126 21:05:31.842291 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:31 crc kubenswrapper[4899]: I0126 21:05:31.842305 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:31 crc kubenswrapper[4899]: I0126 21:05:31.873190 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" podStartSLOduration=7.8731652709999995 podStartE2EDuration="7.873165271s" podCreationTimestamp="2026-01-26 21:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:05:31.871829983 +0000 UTC m=+621.253418020" watchObservedRunningTime="2026-01-26 21:05:31.873165271 +0000 UTC m=+621.254753308" Jan 26 21:05:31 crc kubenswrapper[4899]: I0126 21:05:31.905312 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:32 crc kubenswrapper[4899]: I0126 21:05:32.847644 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:32 crc kubenswrapper[4899]: I0126 21:05:32.880382 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:36 crc kubenswrapper[4899]: I0126 21:05:36.931162 4899 scope.go:117] "RemoveContainer" containerID="6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf" Jan 26 21:05:36 crc kubenswrapper[4899]: E0126 21:05:36.932532 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-24sf9_openshift-multus(595ae596-1477-4438-94f7-69400dc1ba20)\"" pod="openshift-multus/multus-24sf9" podUID="595ae596-1477-4438-94f7-69400dc1ba20" Jan 26 21:05:47 crc kubenswrapper[4899]: I0126 21:05:47.930980 4899 scope.go:117] "RemoveContainer" containerID="6c4d7f7a8e96fc84272e695b643dbe28e96ef9580bd73c64ac8ab76dd615e8cf" Jan 26 21:05:48 crc kubenswrapper[4899]: I0126 
21:05:48.961953 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/2.log" Jan 26 21:05:48 crc kubenswrapper[4899]: I0126 21:05:48.963338 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/1.log" Jan 26 21:05:48 crc kubenswrapper[4899]: I0126 21:05:48.963401 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24sf9" event={"ID":"595ae596-1477-4438-94f7-69400dc1ba20","Type":"ContainerStarted","Data":"6ffb8d2f47c56b58414bafee16668b08041237d11b6152b2e4caf98690ccd28b"} Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.256165 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9"] Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.257314 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.259691 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.266020 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9"] Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.387584 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlc7d\" (UniqueName: \"kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.387719 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.387874 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: 
I0126 21:05:50.489464 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlc7d\" (UniqueName: \"kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.489747 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.489851 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.490467 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.490479 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.512172 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlc7d\" (UniqueName: \"kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.571394 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.812275 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9"] Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.975375 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerStarted","Data":"3db09df222abf6ea62cc62a306897902b329076bf39e8d198b065006ec0e960e"} Jan 26 21:05:50 crc kubenswrapper[4899]: I0126 21:05:50.975844 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerStarted","Data":"85e45ef81310a24d7e2fe4268fd6fdae184a36ebdea6cd330a34933995a7ec97"} Jan 26 21:05:51 crc kubenswrapper[4899]: I0126 21:05:51.984407 4899 
generic.go:334] "Generic (PLEG): container finished" podID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerID="3db09df222abf6ea62cc62a306897902b329076bf39e8d198b065006ec0e960e" exitCode=0 Jan 26 21:05:51 crc kubenswrapper[4899]: I0126 21:05:51.984470 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerDied","Data":"3db09df222abf6ea62cc62a306897902b329076bf39e8d198b065006ec0e960e"} Jan 26 21:05:51 crc kubenswrapper[4899]: I0126 21:05:51.987859 4899 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 21:05:54 crc kubenswrapper[4899]: I0126 21:05:54.000958 4899 generic.go:334] "Generic (PLEG): container finished" podID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerID="2c0c524b87025f83bdc584b10b3f338fd989f782948563a9154724490d07f797" exitCode=0 Jan 26 21:05:54 crc kubenswrapper[4899]: I0126 21:05:54.001329 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerDied","Data":"2c0c524b87025f83bdc584b10b3f338fd989f782948563a9154724490d07f797"} Jan 26 21:05:54 crc kubenswrapper[4899]: I0126 21:05:54.658893 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xfcrp" Jan 26 21:05:55 crc kubenswrapper[4899]: I0126 21:05:55.012584 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerStarted","Data":"eb9b84d757dcc0e320ac32e4d99d5dfc402ceda4f669dad5847ba01431a570cc"} Jan 26 21:05:55 crc kubenswrapper[4899]: I0126 21:05:55.039760 4899 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" podStartSLOduration=3.902610045 podStartE2EDuration="5.039733003s" podCreationTimestamp="2026-01-26 21:05:50 +0000 UTC" firstStartedPulling="2026-01-26 21:05:51.987573189 +0000 UTC m=+641.369161226" lastFinishedPulling="2026-01-26 21:05:53.124696147 +0000 UTC m=+642.506284184" observedRunningTime="2026-01-26 21:05:55.036079287 +0000 UTC m=+644.417667394" watchObservedRunningTime="2026-01-26 21:05:55.039733003 +0000 UTC m=+644.421321080" Jan 26 21:05:56 crc kubenswrapper[4899]: I0126 21:05:56.022123 4899 generic.go:334] "Generic (PLEG): container finished" podID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerID="eb9b84d757dcc0e320ac32e4d99d5dfc402ceda4f669dad5847ba01431a570cc" exitCode=0 Jan 26 21:05:56 crc kubenswrapper[4899]: I0126 21:05:56.022176 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerDied","Data":"eb9b84d757dcc0e320ac32e4d99d5dfc402ceda4f669dad5847ba01431a570cc"} Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.257741 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.387163 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlc7d\" (UniqueName: \"kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d\") pod \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.387247 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util\") pod \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.387325 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle\") pod \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\" (UID: \"91a627e5-d605-4e13-bec3-0bdfa43e0a72\") " Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.388361 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle" (OuterVolumeSpecName: "bundle") pod "91a627e5-d605-4e13-bec3-0bdfa43e0a72" (UID: "91a627e5-d605-4e13-bec3-0bdfa43e0a72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.394422 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d" (OuterVolumeSpecName: "kube-api-access-zlc7d") pod "91a627e5-d605-4e13-bec3-0bdfa43e0a72" (UID: "91a627e5-d605-4e13-bec3-0bdfa43e0a72"). InnerVolumeSpecName "kube-api-access-zlc7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.397569 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util" (OuterVolumeSpecName: "util") pod "91a627e5-d605-4e13-bec3-0bdfa43e0a72" (UID: "91a627e5-d605-4e13-bec3-0bdfa43e0a72"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.488257 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.488286 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a627e5-d605-4e13-bec3-0bdfa43e0a72-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:57 crc kubenswrapper[4899]: I0126 21:05:57.488296 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlc7d\" (UniqueName: \"kubernetes.io/projected/91a627e5-d605-4e13-bec3-0bdfa43e0a72-kube-api-access-zlc7d\") on node \"crc\" DevicePath \"\"" Jan 26 21:05:58 crc kubenswrapper[4899]: I0126 21:05:58.034668 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" event={"ID":"91a627e5-d605-4e13-bec3-0bdfa43e0a72","Type":"ContainerDied","Data":"85e45ef81310a24d7e2fe4268fd6fdae184a36ebdea6cd330a34933995a7ec97"} Jan 26 21:05:58 crc kubenswrapper[4899]: I0126 21:05:58.034725 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e45ef81310a24d7e2fe4268fd6fdae184a36ebdea6cd330a34933995a7ec97" Jan 26 21:05:58 crc kubenswrapper[4899]: I0126 21:05:58.034741 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.854956 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p"] Jan 26 21:06:07 crc kubenswrapper[4899]: E0126 21:06:07.855894 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="extract" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.855910 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="extract" Jan 26 21:06:07 crc kubenswrapper[4899]: E0126 21:06:07.855955 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="pull" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.855964 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="pull" Jan 26 21:06:07 crc kubenswrapper[4899]: E0126 21:06:07.855979 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="util" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.855987 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="util" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.856118 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a627e5-d605-4e13-bec3-0bdfa43e0a72" containerName="extract" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.856614 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.869644 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.869676 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.869742 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.869828 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.877342 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p"] Jan 26 21:06:07 crc kubenswrapper[4899]: I0126 21:06:07.886778 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-f5hr4" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.033279 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.033328 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-webhook-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: 
\"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.033357 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjsz\" (UniqueName: \"kubernetes.io/projected/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-kube-api-access-tmjsz\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.081553 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5"] Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.083132 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.086650 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-z9wx9" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.086695 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.087461 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.114069 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5"] Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.136817 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjsz\" (UniqueName: \"kubernetes.io/projected/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-kube-api-access-tmjsz\") pod 
\"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.137267 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.137323 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-webhook-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.146244 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.166419 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjsz\" (UniqueName: \"kubernetes.io/projected/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-kube-api-access-tmjsz\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 
21:06:08.168731 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65a48fb2-a892-4d8e-96ba-7fee5747d2f3-webhook-cert\") pod \"metallb-operator-controller-manager-78b88669b5-qgw6p\" (UID: \"65a48fb2-a892-4d8e-96ba-7fee5747d2f3\") " pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.173086 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.238441 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-webhook-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.238876 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626fc\" (UniqueName: \"kubernetes.io/projected/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-kube-api-access-626fc\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.238910 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-apiservice-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 
21:06:08.339767 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-webhook-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.339864 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626fc\" (UniqueName: \"kubernetes.io/projected/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-kube-api-access-626fc\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.339894 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-apiservice-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.344450 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-webhook-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.347273 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-apiservice-cert\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: 
\"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.358990 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626fc\" (UniqueName: \"kubernetes.io/projected/7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec-kube-api-access-626fc\") pod \"metallb-operator-webhook-server-d9559955b-jj9n5\" (UID: \"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec\") " pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.374720 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p"] Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.411370 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:08 crc kubenswrapper[4899]: I0126 21:06:08.850208 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5"] Jan 26 21:06:08 crc kubenswrapper[4899]: W0126 21:06:08.854175 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c68eca2_a2e7_4a3c_b614_6e8104b2b0ec.slice/crio-06e5a6d9b19e378980cb643e59956536b6097fc68ba770ddc839dc7302f0ff19 WatchSource:0}: Error finding container 06e5a6d9b19e378980cb643e59956536b6097fc68ba770ddc839dc7302f0ff19: Status 404 returned error can't find the container with id 06e5a6d9b19e378980cb643e59956536b6097fc68ba770ddc839dc7302f0ff19 Jan 26 21:06:09 crc kubenswrapper[4899]: I0126 21:06:09.103301 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" 
event={"ID":"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec","Type":"ContainerStarted","Data":"06e5a6d9b19e378980cb643e59956536b6097fc68ba770ddc839dc7302f0ff19"} Jan 26 21:06:09 crc kubenswrapper[4899]: I0126 21:06:09.104280 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" event={"ID":"65a48fb2-a892-4d8e-96ba-7fee5747d2f3","Type":"ContainerStarted","Data":"972369aa02d4b7772398609d1098e5c994df408312625b9e205507bdcf0b4a82"} Jan 26 21:06:11 crc kubenswrapper[4899]: I0126 21:06:11.248856 4899 scope.go:117] "RemoveContainer" containerID="a67d24e714a631cf429c1b815c09cd81ec904b03b09d1bcab788de52458822f5" Jan 26 21:06:16 crc kubenswrapper[4899]: I0126 21:06:16.155508 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" event={"ID":"65a48fb2-a892-4d8e-96ba-7fee5747d2f3","Type":"ContainerStarted","Data":"9c75eb8c523ab7ca22155abe30b2670e72e218cf8638eedccad5f80b0c66eddc"} Jan 26 21:06:16 crc kubenswrapper[4899]: I0126 21:06:16.156203 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:16 crc kubenswrapper[4899]: I0126 21:06:16.157372 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" event={"ID":"7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec","Type":"ContainerStarted","Data":"451f899a0bbe2f0b16fff9aadb2b5214fc0334c7399f36fc60b4e93ad8e1cc79"} Jan 26 21:06:16 crc kubenswrapper[4899]: I0126 21:06:16.157555 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:16 crc kubenswrapper[4899]: I0126 21:06:16.159662 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24sf9_595ae596-1477-4438-94f7-69400dc1ba20/kube-multus/2.log" Jan 26 21:06:16 crc 
kubenswrapper[4899]: I0126 21:06:16.205130 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" podStartSLOduration=1.668156947 podStartE2EDuration="9.205109097s" podCreationTimestamp="2026-01-26 21:06:07 +0000 UTC" firstStartedPulling="2026-01-26 21:06:08.389755242 +0000 UTC m=+657.771343279" lastFinishedPulling="2026-01-26 21:06:15.926707392 +0000 UTC m=+665.308295429" observedRunningTime="2026-01-26 21:06:16.182150026 +0000 UTC m=+665.563738063" watchObservedRunningTime="2026-01-26 21:06:16.205109097 +0000 UTC m=+665.586697134" Jan 26 21:06:28 crc kubenswrapper[4899]: I0126 21:06:28.416758 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" Jan 26 21:06:28 crc kubenswrapper[4899]: I0126 21:06:28.438841 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d9559955b-jj9n5" podStartSLOduration=13.35637725 podStartE2EDuration="20.438818673s" podCreationTimestamp="2026-01-26 21:06:08 +0000 UTC" firstStartedPulling="2026-01-26 21:06:08.857108858 +0000 UTC m=+658.238696895" lastFinishedPulling="2026-01-26 21:06:15.939550281 +0000 UTC m=+665.321138318" observedRunningTime="2026-01-26 21:06:16.205056276 +0000 UTC m=+665.586644343" watchObservedRunningTime="2026-01-26 21:06:28.438818673 +0000 UTC m=+677.820406710" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.175890 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78b88669b5-qgw6p" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.891463 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz"] Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.892814 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.895127 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t97hl"] Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.897219 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.897852 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.901848 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-j9w5q" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.903579 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.905174 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 21:06:48 crc kubenswrapper[4899]: I0126 21:06:48.916874 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz"] Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.021567 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ql4jc"] Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.022617 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: W0126 21:06:49.027776 4899 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: secrets "metallb-memberlist" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 21:06:49 crc kubenswrapper[4899]: W0126 21:06:49.027798 4899 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: configmaps "metallb-excludel2" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.027819 4899 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-memberlist\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 21:06:49 crc kubenswrapper[4899]: W0126 21:06:49.027837 4899 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: secrets "speaker-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.027895 4899 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.027849 4899 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"metallb-excludel2\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 21:06:49 crc kubenswrapper[4899]: W0126 21:06:49.028047 4899 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-fl8rd": failed to list *v1.Secret: secrets "speaker-dockercfg-fl8rd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.028141 4899 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-fl8rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-dockercfg-fl8rd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038042 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqs9\" (UniqueName: \"kubernetes.io/projected/aa46d965-a136-4e45-bee6-e5a64dc763f5-kube-api-access-5cqs9\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038085 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038335 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-reloader\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038466 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-startup\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038517 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-sockets\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038563 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038585 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics-certs\") pod 
\"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038646 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-conf\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.038715 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlhv2\" (UniqueName: \"kubernetes.io/projected/2c74cccf-4954-447b-90d6-438a41878caa-kube-api-access-jlhv2\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.046337 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-k5x85"] Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.047182 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.051359 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.076632 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k5x85"] Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140021 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-startup\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140077 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metallb-excludel2\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140105 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-sockets\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140130 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnln\" (UniqueName: \"kubernetes.io/projected/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-kube-api-access-rgnln\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140152 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140350 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics-certs\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140433 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140514 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-conf\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.140532 4899 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140548 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlhv2\" (UniqueName: \"kubernetes.io/projected/2c74cccf-4954-447b-90d6-438a41878caa-kube-api-access-jlhv2\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: 
\"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.140616 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert podName:2c74cccf-4954-447b-90d6-438a41878caa nodeName:}" failed. No retries permitted until 2026-01-26 21:06:49.640595935 +0000 UTC m=+699.022183972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert") pod "frr-k8s-webhook-server-7df86c4f6c-5kknz" (UID: "2c74cccf-4954-447b-90d6-438a41878caa") : secret "frr-k8s-webhook-server-cert" not found Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140658 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140843 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqs9\" (UniqueName: \"kubernetes.io/projected/aa46d965-a136-4e45-bee6-e5a64dc763f5-kube-api-access-5cqs9\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140895 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metrics-certs\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.140943 4899 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141011 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-reloader\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141050 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-cert\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141079 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xgt4\" (UniqueName: \"kubernetes.io/projected/887bd990-cb6d-4f69-bcf2-cf642b2c165b-kube-api-access-2xgt4\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141471 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-sockets\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141542 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-reloader\") pod \"frr-k8s-t97hl\" 
(UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.141695 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-conf\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.142422 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa46d965-a136-4e45-bee6-e5a64dc763f5-frr-startup\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.143024 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.150384 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa46d965-a136-4e45-bee6-e5a64dc763f5-metrics-certs\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.160651 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqs9\" (UniqueName: \"kubernetes.io/projected/aa46d965-a136-4e45-bee6-e5a64dc763f5-kube-api-access-5cqs9\") pod \"frr-k8s-t97hl\" (UID: \"aa46d965-a136-4e45-bee6-e5a64dc763f5\") " pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.162492 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jlhv2\" (UniqueName: \"kubernetes.io/projected/2c74cccf-4954-447b-90d6-438a41878caa-kube-api-access-jlhv2\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.235266 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.241919 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metallb-excludel2\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.242025 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgnln\" (UniqueName: \"kubernetes.io/projected/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-kube-api-access-rgnln\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.242073 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.242115 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 
21:06:49.242157 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metrics-certs\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.242205 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-cert\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.242233 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xgt4\" (UniqueName: \"kubernetes.io/projected/887bd990-cb6d-4f69-bcf2-cf642b2c165b-kube-api-access-2xgt4\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.242917 4899 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 26 21:06:49 crc kubenswrapper[4899]: E0126 21:06:49.242998 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs podName:887bd990-cb6d-4f69-bcf2-cf642b2c165b nodeName:}" failed. No retries permitted until 2026-01-26 21:06:49.742979203 +0000 UTC m=+699.124567240 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs") pod "controller-6968d8fdc4-k5x85" (UID: "887bd990-cb6d-4f69-bcf2-cf642b2c165b") : secret "controller-certs-secret" not found Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.246372 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.261554 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xgt4\" (UniqueName: \"kubernetes.io/projected/887bd990-cb6d-4f69-bcf2-cf642b2c165b-kube-api-access-2xgt4\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.262114 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgnln\" (UniqueName: \"kubernetes.io/projected/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-kube-api-access-rgnln\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.263487 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-cert\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.647005 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc 
kubenswrapper[4899]: I0126 21:06:49.651833 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c74cccf-4954-447b-90d6-438a41878caa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5kknz\" (UID: \"2c74cccf-4954-447b-90d6-438a41878caa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.748693 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.751582 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bd990-cb6d-4f69-bcf2-cf642b2c165b-metrics-certs\") pod \"controller-6968d8fdc4-k5x85\" (UID: \"887bd990-cb6d-4f69-bcf2-cf642b2c165b\") " pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.825758 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:06:49 crc kubenswrapper[4899]: I0126 21:06:49.961166 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.006657 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.029138 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.034240 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metallb-excludel2\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.036754 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-metrics-certs\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.063482 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz"] Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.078595 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fl8rd" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.222965 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k5x85"] Jan 26 21:06:50 crc kubenswrapper[4899]: W0126 21:06:50.228866 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod887bd990_cb6d_4f69_bcf2_cf642b2c165b.slice/crio-45f9c8c3c9d12f95fd5c79dc32d3c9db23f8ec8e907a97b36df64e28b6d05c92 WatchSource:0}: Error finding container 
45f9c8c3c9d12f95fd5c79dc32d3c9db23f8ec8e907a97b36df64e28b6d05c92: Status 404 returned error can't find the container with id 45f9c8c3c9d12f95fd5c79dc32d3c9db23f8ec8e907a97b36df64e28b6d05c92 Jan 26 21:06:50 crc kubenswrapper[4899]: E0126 21:06:50.243290 4899 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: failed to sync secret cache: timed out waiting for the condition Jan 26 21:06:50 crc kubenswrapper[4899]: E0126 21:06:50.243410 4899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist podName:5aede76a-7f3b-4b2d-827f-5aae59a3a65f nodeName:}" failed. No retries permitted until 2026-01-26 21:06:50.743382246 +0000 UTC m=+700.124970303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist") pod "speaker-ql4jc" (UID: "5aede76a-7f3b-4b2d-827f-5aae59a3a65f") : failed to sync secret cache: timed out waiting for the condition Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.327533 4899 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.357803 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" event={"ID":"2c74cccf-4954-447b-90d6-438a41878caa","Type":"ContainerStarted","Data":"c8a1fe168b54debc8a8beb9f2f85a6cf17692e2c019a9cb69e051fe6fb0c879a"} Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.358740 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"a333dac2c74042510ae0fdaf062b7eb3f7971ef7db05a3b1d49413c067198b2d"} Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.359794 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-6968d8fdc4-k5x85" event={"ID":"887bd990-cb6d-4f69-bcf2-cf642b2c165b","Type":"ContainerStarted","Data":"45f9c8c3c9d12f95fd5c79dc32d3c9db23f8ec8e907a97b36df64e28b6d05c92"} Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.762725 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.774257 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5aede76a-7f3b-4b2d-827f-5aae59a3a65f-memberlist\") pod \"speaker-ql4jc\" (UID: \"5aede76a-7f3b-4b2d-827f-5aae59a3a65f\") " pod="metallb-system/speaker-ql4jc" Jan 26 21:06:50 crc kubenswrapper[4899]: I0126 21:06:50.840501 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ql4jc" Jan 26 21:06:51 crc kubenswrapper[4899]: I0126 21:06:51.365775 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ql4jc" event={"ID":"5aede76a-7f3b-4b2d-827f-5aae59a3a65f","Type":"ContainerStarted","Data":"cbbf112205102f8153422d63852743268b3d4cdc873b8e4cc9e3d443d86618f8"} Jan 26 21:06:51 crc kubenswrapper[4899]: I0126 21:06:51.366132 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ql4jc" event={"ID":"5aede76a-7f3b-4b2d-827f-5aae59a3a65f","Type":"ContainerStarted","Data":"0c4d8691218312614e0d7477286ad9cc721cd68260e6347bcadb7060d26c08e9"} Jan 26 21:06:51 crc kubenswrapper[4899]: I0126 21:06:51.367420 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k5x85" event={"ID":"887bd990-cb6d-4f69-bcf2-cf642b2c165b","Type":"ContainerStarted","Data":"a2ac0a9bc40135d7f33240dd7c2702e198ac2851d57abc239238abc56191545b"} Jan 26 21:06:55 crc kubenswrapper[4899]: I0126 21:06:55.412247 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ql4jc" event={"ID":"5aede76a-7f3b-4b2d-827f-5aae59a3a65f","Type":"ContainerStarted","Data":"6080a6c08880768c47ab79c579adacf0a3ba74cf55553ad1fe0237ae79d6b55f"} Jan 26 21:06:55 crc kubenswrapper[4899]: I0126 21:06:55.413739 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ql4jc" Jan 26 21:06:55 crc kubenswrapper[4899]: I0126 21:06:55.421792 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k5x85" event={"ID":"887bd990-cb6d-4f69-bcf2-cf642b2c165b","Type":"ContainerStarted","Data":"c6386a09c3440c2e2832f71c229c9a41d907e9677e95933e743ad300106acfa2"} Jan 26 21:06:55 crc kubenswrapper[4899]: I0126 21:06:55.421962 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:06:55 crc 
kubenswrapper[4899]: I0126 21:06:55.433613 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ql4jc" podStartSLOduration=3.5033219730000003 podStartE2EDuration="6.433595659s" podCreationTimestamp="2026-01-26 21:06:49 +0000 UTC" firstStartedPulling="2026-01-26 21:06:51.278053755 +0000 UTC m=+700.659641802" lastFinishedPulling="2026-01-26 21:06:54.208327451 +0000 UTC m=+703.589915488" observedRunningTime="2026-01-26 21:06:55.433233238 +0000 UTC m=+704.814821445" watchObservedRunningTime="2026-01-26 21:06:55.433595659 +0000 UTC m=+704.815183696" Jan 26 21:06:55 crc kubenswrapper[4899]: I0126 21:06:55.460946 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-k5x85" podStartSLOduration=2.718371342 podStartE2EDuration="6.460900105s" podCreationTimestamp="2026-01-26 21:06:49 +0000 UTC" firstStartedPulling="2026-01-26 21:06:50.430463421 +0000 UTC m=+699.812051458" lastFinishedPulling="2026-01-26 21:06:54.172992184 +0000 UTC m=+703.554580221" observedRunningTime="2026-01-26 21:06:55.453323857 +0000 UTC m=+704.834911904" watchObservedRunningTime="2026-01-26 21:06:55.460900105 +0000 UTC m=+704.842488142" Jan 26 21:07:00 crc kubenswrapper[4899]: I0126 21:07:00.457021 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" event={"ID":"2c74cccf-4954-447b-90d6-438a41878caa","Type":"ContainerStarted","Data":"7a313962cd67548ffa005b62587eb154c8f01dab5afba68b6c3e6b49ffd5115f"} Jan 26 21:07:00 crc kubenswrapper[4899]: I0126 21:07:00.457673 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:07:00 crc kubenswrapper[4899]: I0126 21:07:00.459181 4899 generic.go:334] "Generic (PLEG): container finished" podID="aa46d965-a136-4e45-bee6-e5a64dc763f5" containerID="cf614a6313a446dd54eedb0eb772b8a4f672abd6c069cff80a8517ba4038377b" 
exitCode=0 Jan 26 21:07:00 crc kubenswrapper[4899]: I0126 21:07:00.459220 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerDied","Data":"cf614a6313a446dd54eedb0eb772b8a4f672abd6c069cff80a8517ba4038377b"} Jan 26 21:07:00 crc kubenswrapper[4899]: I0126 21:07:00.479566 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" podStartSLOduration=2.54323192 podStartE2EDuration="12.479543719s" podCreationTimestamp="2026-01-26 21:06:48 +0000 UTC" firstStartedPulling="2026-01-26 21:06:50.074750471 +0000 UTC m=+699.456338498" lastFinishedPulling="2026-01-26 21:07:00.01106226 +0000 UTC m=+709.392650297" observedRunningTime="2026-01-26 21:07:00.47576137 +0000 UTC m=+709.857349447" watchObservedRunningTime="2026-01-26 21:07:00.479543719 +0000 UTC m=+709.861131776" Jan 26 21:07:01 crc kubenswrapper[4899]: I0126 21:07:01.468092 4899 generic.go:334] "Generic (PLEG): container finished" podID="aa46d965-a136-4e45-bee6-e5a64dc763f5" containerID="c73e8cf674ca66039db0d82554d46b4a16e24c1250af6df1f7242de839e3d6b4" exitCode=0 Jan 26 21:07:01 crc kubenswrapper[4899]: I0126 21:07:01.468190 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerDied","Data":"c73e8cf674ca66039db0d82554d46b4a16e24c1250af6df1f7242de839e3d6b4"} Jan 26 21:07:02 crc kubenswrapper[4899]: I0126 21:07:02.476360 4899 generic.go:334] "Generic (PLEG): container finished" podID="aa46d965-a136-4e45-bee6-e5a64dc763f5" containerID="b32dd427e4b10a3bee12fd0b2bb962f1eb2e680f99324eda77123143f91fb828" exitCode=0 Jan 26 21:07:02 crc kubenswrapper[4899]: I0126 21:07:02.476431 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" 
event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerDied","Data":"b32dd427e4b10a3bee12fd0b2bb962f1eb2e680f99324eda77123143f91fb828"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485128 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"25aa290327b9045a1792ad070ad70755ca154cd9fe2a9e8d8717a57091d95025"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485171 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"5fd5963523a19a2185f9d9e6a8e138aac9cceb9517988a4e3809e1c5c13df76a"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485179 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"bcb1a50e5a18baaf4e5ec609a944c1a869c85e62672ae8d081413c029562a60c"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485188 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"d438689df20e014c25931df4b0361eb2ea8d522a56ea007320c8ee80f5f687f9"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485195 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"a9cd8b1bcb14d6a48399dcea05f55bbb6520c77282fd151e6e50d0cbdf283d21"} Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.485203 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t97hl" event={"ID":"aa46d965-a136-4e45-bee6-e5a64dc763f5","Type":"ContainerStarted","Data":"ca927241026bc57b4788c5eed42dc869b881fb81247668f0025edf83be36b314"} Jan 26 21:07:03 crc 
kubenswrapper[4899]: I0126 21:07:03.485308 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:07:03 crc kubenswrapper[4899]: I0126 21:07:03.506094 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t97hl" podStartSLOduration=5.703823867 podStartE2EDuration="15.506076707s" podCreationTimestamp="2026-01-26 21:06:48 +0000 UTC" firstStartedPulling="2026-01-26 21:06:50.23901048 +0000 UTC m=+699.620598537" lastFinishedPulling="2026-01-26 21:07:00.04126334 +0000 UTC m=+709.422851377" observedRunningTime="2026-01-26 21:07:03.501780743 +0000 UTC m=+712.883368780" watchObservedRunningTime="2026-01-26 21:07:03.506076707 +0000 UTC m=+712.887664734" Jan 26 21:07:04 crc kubenswrapper[4899]: I0126 21:07:04.235782 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:07:04 crc kubenswrapper[4899]: I0126 21:07:04.275439 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:07:09 crc kubenswrapper[4899]: I0126 21:07:09.968983 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-k5x85" Jan 26 21:07:10 crc kubenswrapper[4899]: I0126 21:07:10.845910 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ql4jc" Jan 26 21:07:19 crc kubenswrapper[4899]: I0126 21:07:19.239938 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t97hl" Jan 26 21:07:19 crc kubenswrapper[4899]: I0126 21:07:19.833175 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5kknz" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.404763 4899 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.406389 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.408174 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-bwtm7" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.409333 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.409581 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.426082 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.496291 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s65g2\" (UniqueName: \"kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2\") pod \"mariadb-operator-index-pr622\" (UID: \"43a9bc55-7e28-4ef8-93b6-72913f3fc865\") " pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.597811 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s65g2\" (UniqueName: \"kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2\") pod \"mariadb-operator-index-pr622\" (UID: \"43a9bc55-7e28-4ef8-93b6-72913f3fc865\") " pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.622713 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s65g2\" (UniqueName: 
\"kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2\") pod \"mariadb-operator-index-pr622\" (UID: \"43a9bc55-7e28-4ef8-93b6-72913f3fc865\") " pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:20 crc kubenswrapper[4899]: I0126 21:07:20.725989 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:21 crc kubenswrapper[4899]: I0126 21:07:21.105917 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:21 crc kubenswrapper[4899]: I0126 21:07:21.600645 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-pr622" event={"ID":"43a9bc55-7e28-4ef8-93b6-72913f3fc865","Type":"ContainerStarted","Data":"93f282301105191d1fd90e961ca3a72d53a119267c5ee55cdcbf8ad3a9c86623"} Jan 26 21:07:24 crc kubenswrapper[4899]: I0126 21:07:24.620308 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-pr622" event={"ID":"43a9bc55-7e28-4ef8-93b6-72913f3fc865","Type":"ContainerStarted","Data":"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3"} Jan 26 21:07:24 crc kubenswrapper[4899]: I0126 21:07:24.646752 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-pr622" podStartSLOduration=1.918825612 podStartE2EDuration="4.646723991s" podCreationTimestamp="2026-01-26 21:07:20 +0000 UTC" firstStartedPulling="2026-01-26 21:07:21.113940146 +0000 UTC m=+730.495528183" lastFinishedPulling="2026-01-26 21:07:23.841838505 +0000 UTC m=+733.223426562" observedRunningTime="2026-01-26 21:07:24.638116463 +0000 UTC m=+734.019704500" watchObservedRunningTime="2026-01-26 21:07:24.646723991 +0000 UTC m=+734.028312058" Jan 26 21:07:25 crc kubenswrapper[4899]: I0126 21:07:25.606194 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.201353 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.202231 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.215354 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.371858 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhhq\" (UniqueName: \"kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq\") pod \"mariadb-operator-index-cn6jg\" (UID: \"e843f00e-9baa-4509-8226-a90bae3a2451\") " pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.473791 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhhq\" (UniqueName: \"kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq\") pod \"mariadb-operator-index-cn6jg\" (UID: \"e843f00e-9baa-4509-8226-a90bae3a2451\") " pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.492988 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhhq\" (UniqueName: \"kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq\") pod \"mariadb-operator-index-cn6jg\" (UID: \"e843f00e-9baa-4509-8226-a90bae3a2451\") " pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.558079 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.631778 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-pr622" podUID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" containerName="registry-server" containerID="cri-o://409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3" gracePeriod=2 Jan 26 21:07:26 crc kubenswrapper[4899]: I0126 21:07:26.825881 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.013358 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.192803 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s65g2\" (UniqueName: \"kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2\") pod \"43a9bc55-7e28-4ef8-93b6-72913f3fc865\" (UID: \"43a9bc55-7e28-4ef8-93b6-72913f3fc865\") " Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.197231 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2" (OuterVolumeSpecName: "kube-api-access-s65g2") pod "43a9bc55-7e28-4ef8-93b6-72913f3fc865" (UID: "43a9bc55-7e28-4ef8-93b6-72913f3fc865"). InnerVolumeSpecName "kube-api-access-s65g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.294654 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s65g2\" (UniqueName: \"kubernetes.io/projected/43a9bc55-7e28-4ef8-93b6-72913f3fc865-kube-api-access-s65g2\") on node \"crc\" DevicePath \"\"" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.637680 4899 generic.go:334] "Generic (PLEG): container finished" podID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" containerID="409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3" exitCode=0 Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.637788 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-pr622" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.637810 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-pr622" event={"ID":"43a9bc55-7e28-4ef8-93b6-72913f3fc865","Type":"ContainerDied","Data":"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3"} Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.640405 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-pr622" event={"ID":"43a9bc55-7e28-4ef8-93b6-72913f3fc865","Type":"ContainerDied","Data":"93f282301105191d1fd90e961ca3a72d53a119267c5ee55cdcbf8ad3a9c86623"} Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.640478 4899 scope.go:117] "RemoveContainer" containerID="409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.653779 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-cn6jg" event={"ID":"e843f00e-9baa-4509-8226-a90bae3a2451","Type":"ContainerStarted","Data":"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302"} Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 
21:07:27.653846 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-cn6jg" event={"ID":"e843f00e-9baa-4509-8226-a90bae3a2451","Type":"ContainerStarted","Data":"7ef4295308aaf9168d6b9697dd674b43064e81ddddd36e787483b6e5480640fc"} Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.669756 4899 scope.go:117] "RemoveContainer" containerID="409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3" Jan 26 21:07:27 crc kubenswrapper[4899]: E0126 21:07:27.670232 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3\": container with ID starting with 409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3 not found: ID does not exist" containerID="409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.670306 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3"} err="failed to get container status \"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3\": rpc error: code = NotFound desc = could not find container \"409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3\": container with ID starting with 409e488b912a36a0ac89ab7e245e927209a3f83253ba20619a5de22954ac3ee3 not found: ID does not exist" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.670729 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-cn6jg" podStartSLOduration=1.1607733 podStartE2EDuration="1.67070732s" podCreationTimestamp="2026-01-26 21:07:26 +0000 UTC" firstStartedPulling="2026-01-26 21:07:26.839386102 +0000 UTC m=+736.220974139" lastFinishedPulling="2026-01-26 21:07:27.349320122 +0000 UTC m=+736.730908159" 
observedRunningTime="2026-01-26 21:07:27.669434393 +0000 UTC m=+737.051022430" watchObservedRunningTime="2026-01-26 21:07:27.67070732 +0000 UTC m=+737.052295367" Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.690362 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:27 crc kubenswrapper[4899]: I0126 21:07:27.695757 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-pr622"] Jan 26 21:07:28 crc kubenswrapper[4899]: I0126 21:07:28.938597 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" path="/var/lib/kubelet/pods/43a9bc55-7e28-4ef8-93b6-72913f3fc865/volumes" Jan 26 21:07:30 crc kubenswrapper[4899]: I0126 21:07:30.109816 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:07:30 crc kubenswrapper[4899]: I0126 21:07:30.110103 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:07:36 crc kubenswrapper[4899]: I0126 21:07:36.559220 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:36 crc kubenswrapper[4899]: I0126 21:07:36.559779 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:36 crc kubenswrapper[4899]: I0126 21:07:36.590866 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:36 crc kubenswrapper[4899]: I0126 21:07:36.735714 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.233058 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9"] Jan 26 21:07:38 crc kubenswrapper[4899]: E0126 21:07:38.234372 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" containerName="registry-server" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.234462 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" containerName="registry-server" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.234658 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a9bc55-7e28-4ef8-93b6-72913f3fc865" containerName="registry-server" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.235565 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.237468 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-44wdn" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.247739 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9"] Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.288999 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.289317 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.289345 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmw7\" (UniqueName: \"kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 
21:07:38.390505 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.390596 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.390626 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnmw7\" (UniqueName: \"kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.391184 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.391285 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.409872 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnmw7\" (UniqueName: \"kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7\") pod \"c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.558828 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:38 crc kubenswrapper[4899]: I0126 21:07:38.940524 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9"] Jan 26 21:07:38 crc kubenswrapper[4899]: W0126 21:07:38.945772 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f152a72_a91c_420a_a87e_a3a5b07bfe7b.slice/crio-28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b WatchSource:0}: Error finding container 28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b: Status 404 returned error can't find the container with id 28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b Jan 26 21:07:39 crc kubenswrapper[4899]: I0126 21:07:39.732378 4899 generic.go:334] "Generic (PLEG): container finished" podID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerID="30f0120d85ad97519e7148002e75cfef37d2cb51a195564b21970b667b2df9f0" exitCode=0 Jan 26 
21:07:39 crc kubenswrapper[4899]: I0126 21:07:39.732981 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" event={"ID":"8f152a72-a91c-420a-a87e-a3a5b07bfe7b","Type":"ContainerDied","Data":"30f0120d85ad97519e7148002e75cfef37d2cb51a195564b21970b667b2df9f0"} Jan 26 21:07:39 crc kubenswrapper[4899]: I0126 21:07:39.734904 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" event={"ID":"8f152a72-a91c-420a-a87e-a3a5b07bfe7b","Type":"ContainerStarted","Data":"28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b"} Jan 26 21:07:40 crc kubenswrapper[4899]: I0126 21:07:40.742184 4899 generic.go:334] "Generic (PLEG): container finished" podID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerID="4254f5894fc9842672915c313d09f71455e399b0a67f6b8ea50bcd9dc9a61926" exitCode=0 Jan 26 21:07:40 crc kubenswrapper[4899]: I0126 21:07:40.742240 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" event={"ID":"8f152a72-a91c-420a-a87e-a3a5b07bfe7b","Type":"ContainerDied","Data":"4254f5894fc9842672915c313d09f71455e399b0a67f6b8ea50bcd9dc9a61926"} Jan 26 21:07:41 crc kubenswrapper[4899]: I0126 21:07:41.809369 4899 generic.go:334] "Generic (PLEG): container finished" podID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerID="a8a76e249db1ad6589f4bae5c7f2b45259c587258f709853efa84ca116055c61" exitCode=0 Jan 26 21:07:41 crc kubenswrapper[4899]: I0126 21:07:41.809437 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" event={"ID":"8f152a72-a91c-420a-a87e-a3a5b07bfe7b","Type":"ContainerDied","Data":"a8a76e249db1ad6589f4bae5c7f2b45259c587258f709853efa84ca116055c61"} Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.128792 
4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.253422 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util\") pod \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.253534 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle\") pod \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.253605 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnmw7\" (UniqueName: \"kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7\") pod \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\" (UID: \"8f152a72-a91c-420a-a87e-a3a5b07bfe7b\") " Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.254760 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle" (OuterVolumeSpecName: "bundle") pod "8f152a72-a91c-420a-a87e-a3a5b07bfe7b" (UID: "8f152a72-a91c-420a-a87e-a3a5b07bfe7b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.262684 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7" (OuterVolumeSpecName: "kube-api-access-rnmw7") pod "8f152a72-a91c-420a-a87e-a3a5b07bfe7b" (UID: "8f152a72-a91c-420a-a87e-a3a5b07bfe7b"). 
InnerVolumeSpecName "kube-api-access-rnmw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.288089 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util" (OuterVolumeSpecName: "util") pod "8f152a72-a91c-420a-a87e-a3a5b07bfe7b" (UID: "8f152a72-a91c-420a-a87e-a3a5b07bfe7b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.355541 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.355596 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnmw7\" (UniqueName: \"kubernetes.io/projected/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-kube-api-access-rnmw7\") on node \"crc\" DevicePath \"\"" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.355620 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f152a72-a91c-420a-a87e-a3a5b07bfe7b-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.824579 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" event={"ID":"8f152a72-a91c-420a-a87e-a3a5b07bfe7b","Type":"ContainerDied","Data":"28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b"} Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.825038 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c1c30a7f788a5d7ccc995b9b3653e4e9cb52d1a5e1b02bb4b227734864a58b" Jan 26 21:07:43 crc kubenswrapper[4899]: I0126 21:07:43.824865 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.569049 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:07:49 crc kubenswrapper[4899]: E0126 21:07:49.569501 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="pull" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.569516 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="pull" Jan 26 21:07:49 crc kubenswrapper[4899]: E0126 21:07:49.569531 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="util" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.569536 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="util" Jan 26 21:07:49 crc kubenswrapper[4899]: E0126 21:07:49.569558 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="extract" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.569564 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="extract" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.569652 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" containerName="extract" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.570039 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.572181 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.572474 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-jzdrs" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.573448 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.626429 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.741084 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rt4\" (UniqueName: \"kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.741148 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.741172 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.841612 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8rt4\" (UniqueName: \"kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.841673 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.841695 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.848008 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " 
pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.848800 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.863511 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8rt4\" (UniqueName: \"kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4\") pod \"mariadb-operator-controller-manager-7d8d94bbd6-zn79j\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:49 crc kubenswrapper[4899]: I0126 21:07:49.894834 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:50 crc kubenswrapper[4899]: I0126 21:07:50.111628 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:07:50 crc kubenswrapper[4899]: W0126 21:07:50.120258 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06708d72_0e7f_4c79_b25e_09103c6e3fc4.slice/crio-8e53c23862858b34c191f5725afa2f4f9c62be7df22f6ecb8bec581d5335b26f WatchSource:0}: Error finding container 8e53c23862858b34c191f5725afa2f4f9c62be7df22f6ecb8bec581d5335b26f: Status 404 returned error can't find the container with id 8e53c23862858b34c191f5725afa2f4f9c62be7df22f6ecb8bec581d5335b26f Jan 26 21:07:50 crc kubenswrapper[4899]: I0126 21:07:50.877683 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" event={"ID":"06708d72-0e7f-4c79-b25e-09103c6e3fc4","Type":"ContainerStarted","Data":"8e53c23862858b34c191f5725afa2f4f9c62be7df22f6ecb8bec581d5335b26f"} Jan 26 21:07:52 crc kubenswrapper[4899]: I0126 21:07:52.394032 4899 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 21:07:54 crc kubenswrapper[4899]: I0126 21:07:54.913050 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" event={"ID":"06708d72-0e7f-4c79-b25e-09103c6e3fc4","Type":"ContainerStarted","Data":"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375"} Jan 26 21:07:54 crc kubenswrapper[4899]: I0126 21:07:54.913586 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:07:54 crc kubenswrapper[4899]: I0126 
21:07:54.937287 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" podStartSLOduration=1.697364167 podStartE2EDuration="5.937267632s" podCreationTimestamp="2026-01-26 21:07:49 +0000 UTC" firstStartedPulling="2026-01-26 21:07:50.122662813 +0000 UTC m=+759.504250860" lastFinishedPulling="2026-01-26 21:07:54.362566288 +0000 UTC m=+763.744154325" observedRunningTime="2026-01-26 21:07:54.929629402 +0000 UTC m=+764.311217449" watchObservedRunningTime="2026-01-26 21:07:54.937267632 +0000 UTC m=+764.318855669" Jan 26 21:07:59 crc kubenswrapper[4899]: I0126 21:07:59.903168 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:08:00 crc kubenswrapper[4899]: I0126 21:08:00.109346 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:08:00 crc kubenswrapper[4899]: I0126 21:08:00.109459 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.016311 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.017629 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.025391 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.027436 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-7bsbz" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.078909 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czph\" (UniqueName: \"kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph\") pod \"infra-operator-index-zwkbt\" (UID: \"2aac220d-7b0b-497f-a04b-a83eface364f\") " pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.179939 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7czph\" (UniqueName: \"kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph\") pod \"infra-operator-index-zwkbt\" (UID: \"2aac220d-7b0b-497f-a04b-a83eface364f\") " pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.199533 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7czph\" (UniqueName: \"kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph\") pod \"infra-operator-index-zwkbt\" (UID: \"2aac220d-7b0b-497f-a04b-a83eface364f\") " pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.344978 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:07 crc kubenswrapper[4899]: I0126 21:08:07.587235 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:08 crc kubenswrapper[4899]: I0126 21:08:08.001251 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-zwkbt" event={"ID":"2aac220d-7b0b-497f-a04b-a83eface364f","Type":"ContainerStarted","Data":"de6894898d480cf3b39c519e206d8dfb0ddabcd91980e74f10fcdc56f714387f"} Jan 26 21:08:09 crc kubenswrapper[4899]: I0126 21:08:09.012682 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-zwkbt" event={"ID":"2aac220d-7b0b-497f-a04b-a83eface364f","Type":"ContainerStarted","Data":"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c"} Jan 26 21:08:09 crc kubenswrapper[4899]: I0126 21:08:09.032251 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-zwkbt" podStartSLOduration=2.157288338 podStartE2EDuration="3.032228711s" podCreationTimestamp="2026-01-26 21:08:06 +0000 UTC" firstStartedPulling="2026-01-26 21:08:07.607617834 +0000 UTC m=+776.989205861" lastFinishedPulling="2026-01-26 21:08:08.482558187 +0000 UTC m=+777.864146234" observedRunningTime="2026-01-26 21:08:09.026509387 +0000 UTC m=+778.408097434" watchObservedRunningTime="2026-01-26 21:08:09.032228711 +0000 UTC m=+778.413816758" Jan 26 21:08:10 crc kubenswrapper[4899]: I0126 21:08:10.799347 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.024977 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-zwkbt" podUID="2aac220d-7b0b-497f-a04b-a83eface364f" containerName="registry-server" 
containerID="cri-o://04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c" gracePeriod=2 Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.382726 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.427211 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:08:11 crc kubenswrapper[4899]: E0126 21:08:11.427905 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aac220d-7b0b-497f-a04b-a83eface364f" containerName="registry-server" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.428016 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aac220d-7b0b-497f-a04b-a83eface364f" containerName="registry-server" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.428553 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aac220d-7b0b-497f-a04b-a83eface364f" containerName="registry-server" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.430071 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.434812 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.540114 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7czph\" (UniqueName: \"kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph\") pod \"2aac220d-7b0b-497f-a04b-a83eface364f\" (UID: \"2aac220d-7b0b-497f-a04b-a83eface364f\") " Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.540751 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66gj\" (UniqueName: \"kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj\") pod \"infra-operator-index-whj5n\" (UID: \"8b6455e9-9d16-4177-a060-0f72c68f12e2\") " pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.544730 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph" (OuterVolumeSpecName: "kube-api-access-7czph") pod "2aac220d-7b0b-497f-a04b-a83eface364f" (UID: "2aac220d-7b0b-497f-a04b-a83eface364f"). InnerVolumeSpecName "kube-api-access-7czph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.641870 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c66gj\" (UniqueName: \"kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj\") pod \"infra-operator-index-whj5n\" (UID: \"8b6455e9-9d16-4177-a060-0f72c68f12e2\") " pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.642122 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7czph\" (UniqueName: \"kubernetes.io/projected/2aac220d-7b0b-497f-a04b-a83eface364f-kube-api-access-7czph\") on node \"crc\" DevicePath \"\"" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.660156 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c66gj\" (UniqueName: \"kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj\") pod \"infra-operator-index-whj5n\" (UID: \"8b6455e9-9d16-4177-a060-0f72c68f12e2\") " pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:11 crc kubenswrapper[4899]: I0126 21:08:11.755719 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.034357 4899 generic.go:334] "Generic (PLEG): container finished" podID="2aac220d-7b0b-497f-a04b-a83eface364f" containerID="04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c" exitCode=0 Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.034397 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-zwkbt" event={"ID":"2aac220d-7b0b-497f-a04b-a83eface364f","Type":"ContainerDied","Data":"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c"} Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.034434 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-zwkbt" event={"ID":"2aac220d-7b0b-497f-a04b-a83eface364f","Type":"ContainerDied","Data":"de6894898d480cf3b39c519e206d8dfb0ddabcd91980e74f10fcdc56f714387f"} Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.034433 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-zwkbt" Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.034472 4899 scope.go:117] "RemoveContainer" containerID="04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c" Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.044407 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.053892 4899 scope.go:117] "RemoveContainer" containerID="04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c" Jan 26 21:08:12 crc kubenswrapper[4899]: E0126 21:08:12.054370 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c\": container with ID starting with 04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c not found: ID does not exist" containerID="04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c" Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.054408 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c"} err="failed to get container status \"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c\": rpc error: code = NotFound desc = could not find container \"04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c\": container with ID starting with 04d2146f6a55623bd6fb85f1159f3380ac4b5b80e8a69617c9b10928dfb9123c not found: ID does not exist" Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.065175 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.068611 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/infra-operator-index-zwkbt"] Jan 26 21:08:12 crc kubenswrapper[4899]: I0126 21:08:12.942877 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aac220d-7b0b-497f-a04b-a83eface364f" path="/var/lib/kubelet/pods/2aac220d-7b0b-497f-a04b-a83eface364f/volumes" Jan 26 21:08:13 crc kubenswrapper[4899]: I0126 21:08:13.043757 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-whj5n" event={"ID":"8b6455e9-9d16-4177-a060-0f72c68f12e2","Type":"ContainerStarted","Data":"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7"} Jan 26 21:08:13 crc kubenswrapper[4899]: I0126 21:08:13.043811 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-whj5n" event={"ID":"8b6455e9-9d16-4177-a060-0f72c68f12e2","Type":"ContainerStarted","Data":"24506e9ab5da1d16a0ee95595471ebd2f4b8a53b9ee39cab6ee360f2dfdde282"} Jan 26 21:08:13 crc kubenswrapper[4899]: I0126 21:08:13.072862 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-whj5n" podStartSLOduration=1.6356127900000001 podStartE2EDuration="2.072841625s" podCreationTimestamp="2026-01-26 21:08:11 +0000 UTC" firstStartedPulling="2026-01-26 21:08:12.066824246 +0000 UTC m=+781.448412273" lastFinishedPulling="2026-01-26 21:08:12.504053021 +0000 UTC m=+781.885641108" observedRunningTime="2026-01-26 21:08:13.069342914 +0000 UTC m=+782.450930961" watchObservedRunningTime="2026-01-26 21:08:13.072841625 +0000 UTC m=+782.454429672" Jan 26 21:08:21 crc kubenswrapper[4899]: I0126 21:08:21.756089 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:21 crc kubenswrapper[4899]: I0126 21:08:21.756599 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:21 crc 
kubenswrapper[4899]: I0126 21:08:21.794403 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:22 crc kubenswrapper[4899]: I0126 21:08:22.170045 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.892875 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw"] Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.894665 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.896998 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-44wdn" Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.914375 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw"] Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.954809 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.954864 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: 
\"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:24 crc kubenswrapper[4899]: I0126 21:08:24.954959 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ksr\" (UniqueName: \"kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.055850 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.055906 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.055981 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ksr\" (UniqueName: \"kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " 
pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.056329 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.056418 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.073708 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ksr\" (UniqueName: \"kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr\") pod \"ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.212116 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:25 crc kubenswrapper[4899]: I0126 21:08:25.669598 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw"] Jan 26 21:08:26 crc kubenswrapper[4899]: I0126 21:08:26.171628 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" event={"ID":"77881c29-649c-4e59-8c20-8d468f552536","Type":"ContainerStarted","Data":"240e04d6f56398f3b89184c83dc16629ee37c4e7b337d6adc5890a50e7b0a354"} Jan 26 21:08:27 crc kubenswrapper[4899]: I0126 21:08:27.180683 4899 generic.go:334] "Generic (PLEG): container finished" podID="77881c29-649c-4e59-8c20-8d468f552536" containerID="37c703bff6a0059d609e6116d28217fa8b8b28b3a53ad45bb7f1275b2bd1446d" exitCode=0 Jan 26 21:08:27 crc kubenswrapper[4899]: I0126 21:08:27.180752 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" event={"ID":"77881c29-649c-4e59-8c20-8d468f552536","Type":"ContainerDied","Data":"37c703bff6a0059d609e6116d28217fa8b8b28b3a53ad45bb7f1275b2bd1446d"} Jan 26 21:08:29 crc kubenswrapper[4899]: I0126 21:08:29.207195 4899 generic.go:334] "Generic (PLEG): container finished" podID="77881c29-649c-4e59-8c20-8d468f552536" containerID="29e24ef8ab52a7de4c3756110d4fe7fba6266bac920ee6b372cc6279374069e0" exitCode=0 Jan 26 21:08:29 crc kubenswrapper[4899]: I0126 21:08:29.207666 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" event={"ID":"77881c29-649c-4e59-8c20-8d468f552536","Type":"ContainerDied","Data":"29e24ef8ab52a7de4c3756110d4fe7fba6266bac920ee6b372cc6279374069e0"} Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.109574 4899 patch_prober.go:28] 
interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.110077 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.110164 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.111718 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.112667 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc" gracePeriod=600 Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 21:08:30.221991 4899 generic.go:334] "Generic (PLEG): container finished" podID="77881c29-649c-4e59-8c20-8d468f552536" containerID="3a0e9779a42c2b693f9013ebcb4445da85ff2ddd0bfbe08fbf47f3bb7cfa969a" exitCode=0 Jan 26 21:08:30 crc kubenswrapper[4899]: I0126 
21:08:30.222038 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" event={"ID":"77881c29-649c-4e59-8c20-8d468f552536","Type":"ContainerDied","Data":"3a0e9779a42c2b693f9013ebcb4445da85ff2ddd0bfbe08fbf47f3bb7cfa969a"} Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.230823 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc" exitCode=0 Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.230904 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc"} Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.231359 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe"} Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.231398 4899 scope.go:117] "RemoveContainer" containerID="c018a7f69a4a011503be63a0439d6960fe854a979779cb714695f295f40f4476" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.559907 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.759256 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle\") pod \"77881c29-649c-4e59-8c20-8d468f552536\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.759322 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ksr\" (UniqueName: \"kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr\") pod \"77881c29-649c-4e59-8c20-8d468f552536\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.759431 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util\") pod \"77881c29-649c-4e59-8c20-8d468f552536\" (UID: \"77881c29-649c-4e59-8c20-8d468f552536\") " Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.762080 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle" (OuterVolumeSpecName: "bundle") pod "77881c29-649c-4e59-8c20-8d468f552536" (UID: "77881c29-649c-4e59-8c20-8d468f552536"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.769542 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr" (OuterVolumeSpecName: "kube-api-access-n6ksr") pod "77881c29-649c-4e59-8c20-8d468f552536" (UID: "77881c29-649c-4e59-8c20-8d468f552536"). InnerVolumeSpecName "kube-api-access-n6ksr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.861124 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.861462 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ksr\" (UniqueName: \"kubernetes.io/projected/77881c29-649c-4e59-8c20-8d468f552536-kube-api-access-n6ksr\") on node \"crc\" DevicePath \"\"" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.959906 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util" (OuterVolumeSpecName: "util") pod "77881c29-649c-4e59-8c20-8d468f552536" (UID: "77881c29-649c-4e59-8c20-8d468f552536"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:08:31 crc kubenswrapper[4899]: I0126 21:08:31.962921 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77881c29-649c-4e59-8c20-8d468f552536-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:08:32 crc kubenswrapper[4899]: I0126 21:08:32.241800 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" event={"ID":"77881c29-649c-4e59-8c20-8d468f552536","Type":"ContainerDied","Data":"240e04d6f56398f3b89184c83dc16629ee37c4e7b337d6adc5890a50e7b0a354"} Jan 26 21:08:32 crc kubenswrapper[4899]: I0126 21:08:32.241864 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240e04d6f56398f3b89184c83dc16629ee37c4e7b337d6adc5890a50e7b0a354" Jan 26 21:08:32 crc kubenswrapper[4899]: I0126 21:08:32.241835 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.273897 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:08:36 crc kubenswrapper[4899]: E0126 21:08:36.275782 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="pull" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.275879 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="pull" Jan 26 21:08:36 crc kubenswrapper[4899]: E0126 21:08:36.275988 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="util" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.276101 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="util" Jan 26 21:08:36 crc kubenswrapper[4899]: E0126 21:08:36.276207 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="extract" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.276278 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="extract" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.276476 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="77881c29-649c-4e59-8c20-8d468f552536" containerName="extract" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.277390 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.280011 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"galera-openstack-dockercfg-tl87c" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.280220 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"openstack-config-data" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.280240 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"openshift-service-ca.crt" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.283604 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"openstack-scripts" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.286641 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"kube-root-ca.crt" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.293836 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.309125 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.310326 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.313037 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.316385 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.320162 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.331518 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421219 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421282 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr6rg\" (UniqueName: \"kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421370 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421414 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc 
kubenswrapper[4899]: I0126 21:08:36.421667 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421767 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421808 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.421877 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422007 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " 
pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422045 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422075 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422147 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422211 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422305 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " 
pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422424 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhr68\" (UniqueName: \"kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422529 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422633 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.422679 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjwm\" (UniqueName: \"kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524438 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " 
pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524538 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524585 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjwm\" (UniqueName: \"kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524634 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524674 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr6rg\" (UniqueName: \"kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524710 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524744 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524782 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524813 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524846 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.524873 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525124 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525139 4899 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") device mount path \"/mnt/openstack/pv02\"" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525153 4899 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") device mount path \"/mnt/openstack/pv08\"" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525153 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525645 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.525672 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.526035 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.526389 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.526489 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.526518 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527129 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527258 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527514 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.526521 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527608 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527597 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 
21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527639 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527643 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527698 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhr68\" (UniqueName: \"kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.527776 4899 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") device mount path \"/mnt/openstack/pv04\"" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.528533 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.528817 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.547813 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.551387 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjwm\" (UniqueName: \"kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.560440 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.575262 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhr68\" (UniqueName: \"kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.575552 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr6rg\" (UniqueName: \"kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg\") pod 
\"openstack-galera-0\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.597036 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.615900 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.635131 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:36 crc kubenswrapper[4899]: I0126 21:08:36.646480 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.150459 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.197504 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:08:37 crc kubenswrapper[4899]: W0126 21:08:37.201188 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93293cee_6c86_4865_8a19_b43659a851f3.slice/crio-692d96848566cec16080265b0ffe9f9e770aca53365f64b0af074ba6f31385ec WatchSource:0}: Error finding container 692d96848566cec16080265b0ffe9f9e770aca53365f64b0af074ba6f31385ec: Status 404 returned error can't find the container with id 692d96848566cec16080265b0ffe9f9e770aca53365f64b0af074ba6f31385ec Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.212674 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.279907 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerStarted","Data":"692d96848566cec16080265b0ffe9f9e770aca53365f64b0af074ba6f31385ec"} Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.281368 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerStarted","Data":"7de1b1491fa7bc5de8f94f26207c77597c52e54b7cf30c65de794a6ef163db52"} Jan 26 21:08:37 crc kubenswrapper[4899]: I0126 21:08:37.282440 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerStarted","Data":"f7a8e9a3a8e33284fb060169a552506159f8e7215e03f71f587a6d53ce5f74ba"} Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.497101 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.498688 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.501495 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.501719 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4fgdw" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.512105 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.526600 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvbr\" (UniqueName: \"kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.526679 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.526705 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: 
\"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.628181 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvbr\" (UniqueName: \"kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.628307 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.628335 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.637139 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.648524 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cfvbr\" (UniqueName: \"kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.656054 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert\") pod \"infra-operator-controller-manager-5789d54c4b-2jdpk\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:41 crc kubenswrapper[4899]: I0126 21:08:41.821113 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:50 crc kubenswrapper[4899]: I0126 21:08:50.783506 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:08:50 crc kubenswrapper[4899]: W0126 21:08:50.788098 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89e50e9_8464_4607_ba9f_97e83b9f09ae.slice/crio-129aa01a298f5c14a5a6bdd4cf8edf6e3c0e66bcfb5308b723e12aa3f493287b WatchSource:0}: Error finding container 129aa01a298f5c14a5a6bdd4cf8edf6e3c0e66bcfb5308b723e12aa3f493287b: Status 404 returned error can't find the container with id 129aa01a298f5c14a5a6bdd4cf8edf6e3c0e66bcfb5308b723e12aa3f493287b Jan 26 21:08:51 crc kubenswrapper[4899]: I0126 21:08:51.440076 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" 
event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerStarted","Data":"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67"} Jan 26 21:08:51 crc kubenswrapper[4899]: I0126 21:08:51.442587 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerStarted","Data":"4d947da3de5046ba1caeeaad9180a7340663554ea8080e04e480cb312536cd4f"} Jan 26 21:08:51 crc kubenswrapper[4899]: I0126 21:08:51.446233 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerStarted","Data":"5240bbe23a4b730e4418cafb12a3771a829533f3f69052df351bc927050ae35d"} Jan 26 21:08:51 crc kubenswrapper[4899]: I0126 21:08:51.448370 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" event={"ID":"f89e50e9-8464-4607-ba9f-97e83b9f09ae","Type":"ContainerStarted","Data":"129aa01a298f5c14a5a6bdd4cf8edf6e3c0e66bcfb5308b723e12aa3f493287b"} Jan 26 21:08:53 crc kubenswrapper[4899]: I0126 21:08:53.462243 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" event={"ID":"f89e50e9-8464-4607-ba9f-97e83b9f09ae","Type":"ContainerStarted","Data":"3a90cbc7dd776d4c119df39cfbf42140429ddff24b5c1eace176a432e1975f12"} Jan 26 21:08:53 crc kubenswrapper[4899]: I0126 21:08:53.462601 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:08:53 crc kubenswrapper[4899]: I0126 21:08:53.501441 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" podStartSLOduration=10.409815508 podStartE2EDuration="12.501413509s" podCreationTimestamp="2026-01-26 
21:08:41 +0000 UTC" firstStartedPulling="2026-01-26 21:08:50.791536848 +0000 UTC m=+820.173124885" lastFinishedPulling="2026-01-26 21:08:52.883134849 +0000 UTC m=+822.264722886" observedRunningTime="2026-01-26 21:08:53.498418913 +0000 UTC m=+822.880006960" watchObservedRunningTime="2026-01-26 21:08:53.501413509 +0000 UTC m=+822.883001576" Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.474289 4899 generic.go:334] "Generic (PLEG): container finished" podID="93293cee-6c86-4865-8a19-b43659a851f3" containerID="bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67" exitCode=0 Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.474369 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerDied","Data":"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67"} Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.477513 4899 generic.go:334] "Generic (PLEG): container finished" podID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerID="4d947da3de5046ba1caeeaad9180a7340663554ea8080e04e480cb312536cd4f" exitCode=0 Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.477583 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerDied","Data":"4d947da3de5046ba1caeeaad9180a7340663554ea8080e04e480cb312536cd4f"} Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.478622 4899 generic.go:334] "Generic (PLEG): container finished" podID="e1149d0e-e93d-496a-9022-51fa77168394" containerID="5240bbe23a4b730e4418cafb12a3771a829533f3f69052df351bc927050ae35d" exitCode=0 Jan 26 21:08:54 crc kubenswrapper[4899]: I0126 21:08:54.478656 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" 
event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerDied","Data":"5240bbe23a4b730e4418cafb12a3771a829533f3f69052df351bc927050ae35d"} Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.487874 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerStarted","Data":"eff235495821bd9d63f429bbdc2eb73fe6c6be35c98946da1487dc70ad4f6b43"} Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.490162 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerStarted","Data":"78def6462e135e1c51cf7586dd668fc67c6c03b3a74bd3086155f1b882d87166"} Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.492483 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerStarted","Data":"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b"} Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.518146 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/openstack-galera-2" podStartSLOduration=7.38638685 podStartE2EDuration="20.518120212s" podCreationTimestamp="2026-01-26 21:08:35 +0000 UTC" firstStartedPulling="2026-01-26 21:08:37.21837349 +0000 UTC m=+806.599961527" lastFinishedPulling="2026-01-26 21:08:50.350106852 +0000 UTC m=+819.731694889" observedRunningTime="2026-01-26 21:08:55.512866211 +0000 UTC m=+824.894454268" watchObservedRunningTime="2026-01-26 21:08:55.518120212 +0000 UTC m=+824.899708289" Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.534263 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/openstack-galera-1" podStartSLOduration=7.315838938 podStartE2EDuration="20.534245537s" podCreationTimestamp="2026-01-26 
21:08:35 +0000 UTC" firstStartedPulling="2026-01-26 21:08:37.203513302 +0000 UTC m=+806.585101349" lastFinishedPulling="2026-01-26 21:08:50.421919911 +0000 UTC m=+819.803507948" observedRunningTime="2026-01-26 21:08:55.529293104 +0000 UTC m=+824.910881151" watchObservedRunningTime="2026-01-26 21:08:55.534245537 +0000 UTC m=+824.915833584" Jan 26 21:08:55 crc kubenswrapper[4899]: I0126 21:08:55.555254 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/openstack-galera-0" podStartSLOduration=7.286254066 podStartE2EDuration="20.555236222s" podCreationTimestamp="2026-01-26 21:08:35 +0000 UTC" firstStartedPulling="2026-01-26 21:08:37.159165744 +0000 UTC m=+806.540753811" lastFinishedPulling="2026-01-26 21:08:50.42814794 +0000 UTC m=+819.809735967" observedRunningTime="2026-01-26 21:08:55.548723844 +0000 UTC m=+824.930311901" watchObservedRunningTime="2026-01-26 21:08:55.555236222 +0000 UTC m=+824.936824259" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.616235 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.616585 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.636185 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.637400 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.647543 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:08:56 crc kubenswrapper[4899]: I0126 21:08:56.647596 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:09:01 crc kubenswrapper[4899]: I0126 21:09:01.827317 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:09:02 crc kubenswrapper[4899]: I0126 21:09:02.744004 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:09:02 crc kubenswrapper[4899]: I0126 21:09:02.823183 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.334510 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/root-account-create-update-vgsv6"] Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.337119 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.340406 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.354721 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/root-account-create-update-vgsv6"] Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.505705 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.505773 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrd5\" (UniqueName: 
\"kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.607205 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.607261 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrd5\" (UniqueName: \"kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.608356 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.638832 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrd5\" (UniqueName: \"kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5\") pod \"root-account-create-update-vgsv6\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:05 crc kubenswrapper[4899]: I0126 21:09:05.660756 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.086612 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/root-account-create-update-vgsv6"] Jan 26 21:09:06 crc kubenswrapper[4899]: W0126 21:09:06.105198 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1bb4284_a142_421b_b41c_46c3b31995fa.slice/crio-9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440 WatchSource:0}: Error finding container 9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440: Status 404 returned error can't find the container with id 9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440 Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.331458 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.332650 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.334664 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"memcached-memcached-dockercfg-fdbvn" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.339750 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.344623 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"memcached-config-data" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.519918 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.520022 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.520065 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msltd\" (UniqueName: \"kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.562844 4899 generic.go:334] "Generic (PLEG): container finished" podID="e1bb4284-a142-421b-b41c-46c3b31995fa" containerID="14875699b6f89f706d1e3913351c8304135ee0f875f0db77b66f6555212a776c" exitCode=0 Jan 
26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.563506 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/root-account-create-update-vgsv6" event={"ID":"e1bb4284-a142-421b-b41c-46c3b31995fa","Type":"ContainerDied","Data":"14875699b6f89f706d1e3913351c8304135ee0f875f0db77b66f6555212a776c"} Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.563532 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/root-account-create-update-vgsv6" event={"ID":"e1bb4284-a142-421b-b41c-46c3b31995fa","Type":"ContainerStarted","Data":"9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440"} Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.621189 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.621278 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.621307 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msltd\" (UniqueName: \"kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.622471 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config\") pod 
\"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.622534 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.640654 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msltd\" (UniqueName: \"kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd\") pod \"memcached-0\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.674031 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:06 crc kubenswrapper[4899]: I0126 21:09:06.945174 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="manila-kuttl-tests/openstack-galera-2" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" probeResult="failure" output=< Jan 26 21:09:06 crc kubenswrapper[4899]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 26 21:09:06 crc kubenswrapper[4899]: > Jan 26 21:09:07 crc kubenswrapper[4899]: I0126 21:09:07.281618 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:09:07 crc kubenswrapper[4899]: I0126 21:09:07.571959 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/memcached-0" event={"ID":"ea09b8ff-8868-45dc-92e5-bdee96d13107","Type":"ContainerStarted","Data":"5c2dfd519c081688820e0660e593165baf67c6997e536c056661613f15851205"} Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.136118 4899 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.138224 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.140387 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-2zxpl" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.142531 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.160225 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.264446 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrd5\" (UniqueName: \"kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5\") pod \"e1bb4284-a142-421b-b41c-46c3b31995fa\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.264746 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts\") pod \"e1bb4284-a142-421b-b41c-46c3b31995fa\" (UID: \"e1bb4284-a142-421b-b41c-46c3b31995fa\") " Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.265015 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncr6d\" (UniqueName: \"kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d\") pod \"rabbitmq-cluster-operator-index-d2bdt\" (UID: \"18a84050-0343-41d2-ab82-1831b3e653d9\") " 
pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.266947 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1bb4284-a142-421b-b41c-46c3b31995fa" (UID: "e1bb4284-a142-421b-b41c-46c3b31995fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.275313 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5" (OuterVolumeSpecName: "kube-api-access-6nrd5") pod "e1bb4284-a142-421b-b41c-46c3b31995fa" (UID: "e1bb4284-a142-421b-b41c-46c3b31995fa"). InnerVolumeSpecName "kube-api-access-6nrd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.366038 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncr6d\" (UniqueName: \"kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d\") pod \"rabbitmq-cluster-operator-index-d2bdt\" (UID: \"18a84050-0343-41d2-ab82-1831b3e653d9\") " pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.366135 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1bb4284-a142-421b-b41c-46c3b31995fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.366150 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrd5\" (UniqueName: \"kubernetes.io/projected/e1bb4284-a142-421b-b41c-46c3b31995fa-kube-api-access-6nrd5\") on node \"crc\" DevicePath \"\"" Jan 26 21:09:09 crc 
kubenswrapper[4899]: I0126 21:09:09.401086 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncr6d\" (UniqueName: \"kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d\") pod \"rabbitmq-cluster-operator-index-d2bdt\" (UID: \"18a84050-0343-41d2-ab82-1831b3e653d9\") " pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.483488 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.599814 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/root-account-create-update-vgsv6" event={"ID":"e1bb4284-a142-421b-b41c-46c3b31995fa","Type":"ContainerDied","Data":"9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440"} Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.599858 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5e2c208c6466d3db692a3e025de830263ea957ea84de5f2ca1c2110585c440" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.599981 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/root-account-create-update-vgsv6" Jan 26 21:09:09 crc kubenswrapper[4899]: I0126 21:09:09.767253 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:09:10 crc kubenswrapper[4899]: I0126 21:09:10.609135 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" event={"ID":"18a84050-0343-41d2-ab82-1831b3e653d9","Type":"ContainerStarted","Data":"430552626b8a5f0e429cdee781db05651be75a5668891c5b006ccd9c9976b52b"} Jan 26 21:09:13 crc kubenswrapper[4899]: E0126 21:09:13.396833 4899 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:40672->38.102.83.22:45343: write tcp 38.102.83.22:40672->38.102.83.22:45343: write: broken pipe Jan 26 21:09:16 crc kubenswrapper[4899]: I0126 21:09:16.722521 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="manila-kuttl-tests/openstack-galera-2" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" probeResult="failure" output=< Jan 26 21:09:16 crc kubenswrapper[4899]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 26 21:09:16 crc kubenswrapper[4899]: > Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.687596 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" event={"ID":"18a84050-0343-41d2-ab82-1831b3e653d9","Type":"ContainerStarted","Data":"2bb6b52109c55b4be7d9b86200e4e5a27888577a8fac982c6e321db06d46cc87"} Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.689075 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/memcached-0" event={"ID":"ea09b8ff-8868-45dc-92e5-bdee96d13107","Type":"ContainerStarted","Data":"ecc13133795a9325f993bb309727c2c044412f3de64419facdc72dbd7f2cd736"} Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.689224 4899 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.725625 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" podStartSLOduration=1.749719089 podStartE2EDuration="8.725608017s" podCreationTimestamp="2026-01-26 21:09:09 +0000 UTC" firstStartedPulling="2026-01-26 21:09:09.770346562 +0000 UTC m=+839.151934599" lastFinishedPulling="2026-01-26 21:09:16.74623549 +0000 UTC m=+846.127823527" observedRunningTime="2026-01-26 21:09:17.707885292 +0000 UTC m=+847.089473339" watchObservedRunningTime="2026-01-26 21:09:17.725608017 +0000 UTC m=+847.107196054" Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.728581 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/memcached-0" podStartSLOduration=3.517441715 podStartE2EDuration="11.728571462s" podCreationTimestamp="2026-01-26 21:09:06 +0000 UTC" firstStartedPulling="2026-01-26 21:09:07.271853339 +0000 UTC m=+836.653441376" lastFinishedPulling="2026-01-26 21:09:15.482983066 +0000 UTC m=+844.864571123" observedRunningTime="2026-01-26 21:09:17.72432008 +0000 UTC m=+847.105908157" watchObservedRunningTime="2026-01-26 21:09:17.728571462 +0000 UTC m=+847.110159499" Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.851126 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:09:17 crc kubenswrapper[4899]: I0126 21:09:17.929571 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:09:19 crc kubenswrapper[4899]: I0126 21:09:19.483776 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:19 crc kubenswrapper[4899]: I0126 21:09:19.484101 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:19 crc kubenswrapper[4899]: I0126 21:09:19.528332 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:19 crc kubenswrapper[4899]: I0126 21:09:19.658550 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:09:19 crc kubenswrapper[4899]: I0126 21:09:19.739375 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:09:21 crc kubenswrapper[4899]: I0126 21:09:21.676814 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/memcached-0" Jan 26 21:09:29 crc kubenswrapper[4899]: I0126 21:09:29.514317 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.548614 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp"] Jan 26 21:09:34 crc kubenswrapper[4899]: E0126 21:09:34.549523 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bb4284-a142-421b-b41c-46c3b31995fa" containerName="mariadb-account-create-update" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.549539 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bb4284-a142-421b-b41c-46c3b31995fa" containerName="mariadb-account-create-update" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.549717 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bb4284-a142-421b-b41c-46c3b31995fa" containerName="mariadb-account-create-update" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.550844 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.556152 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-44wdn" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.557505 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp"] Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.670131 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.670546 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.670571 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkzb\" (UniqueName: \"kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 
21:09:34.771605 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.771679 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.771703 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkzb\" (UniqueName: \"kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.772222 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.772265 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.789839 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkzb\" (UniqueName: \"kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:34 crc kubenswrapper[4899]: I0126 21:09:34.909731 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:35 crc kubenswrapper[4899]: I0126 21:09:35.307879 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp"] Jan 26 21:09:35 crc kubenswrapper[4899]: I0126 21:09:35.818000 4899 generic.go:334] "Generic (PLEG): container finished" podID="b167cf4e-88b9-485d-a032-5767edc49205" containerID="a76d55c10e4b48d800c14ecf7c884466851e49b1fed31835457352172400a960" exitCode=0 Jan 26 21:09:35 crc kubenswrapper[4899]: I0126 21:09:35.818053 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" event={"ID":"b167cf4e-88b9-485d-a032-5767edc49205","Type":"ContainerDied","Data":"a76d55c10e4b48d800c14ecf7c884466851e49b1fed31835457352172400a960"} Jan 26 21:09:35 crc kubenswrapper[4899]: I0126 21:09:35.818108 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" event={"ID":"b167cf4e-88b9-485d-a032-5767edc49205","Type":"ContainerStarted","Data":"d14087bc61735ec3f332661dac55ea6d5325221f1906f876686e8743ea7eaf43"} Jan 26 21:09:36 crc kubenswrapper[4899]: I0126 21:09:36.825910 4899 generic.go:334] "Generic (PLEG): container finished" podID="b167cf4e-88b9-485d-a032-5767edc49205" containerID="5aea9d5a9207310bc145db795d8311f8723356334feaea7be3c965be90c14888" exitCode=0 Jan 26 21:09:36 crc kubenswrapper[4899]: I0126 21:09:36.825968 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" event={"ID":"b167cf4e-88b9-485d-a032-5767edc49205","Type":"ContainerDied","Data":"5aea9d5a9207310bc145db795d8311f8723356334feaea7be3c965be90c14888"} Jan 26 21:09:37 crc kubenswrapper[4899]: I0126 21:09:37.833639 4899 generic.go:334] "Generic (PLEG): container finished" podID="b167cf4e-88b9-485d-a032-5767edc49205" containerID="cd9a23f4ec7372dbf26294faa8f4a368dd88a125c276a70a2b41495672c78589" exitCode=0 Jan 26 21:09:37 crc kubenswrapper[4899]: I0126 21:09:37.833714 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" event={"ID":"b167cf4e-88b9-485d-a032-5767edc49205","Type":"ContainerDied","Data":"cd9a23f4ec7372dbf26294faa8f4a368dd88a125c276a70a2b41495672c78589"} Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.137158 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.254267 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle\") pod \"b167cf4e-88b9-485d-a032-5767edc49205\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.254368 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkzb\" (UniqueName: \"kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb\") pod \"b167cf4e-88b9-485d-a032-5767edc49205\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.254484 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util\") pod \"b167cf4e-88b9-485d-a032-5767edc49205\" (UID: \"b167cf4e-88b9-485d-a032-5767edc49205\") " Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.255751 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle" (OuterVolumeSpecName: "bundle") pod "b167cf4e-88b9-485d-a032-5767edc49205" (UID: "b167cf4e-88b9-485d-a032-5767edc49205"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.260023 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb" (OuterVolumeSpecName: "kube-api-access-vtkzb") pod "b167cf4e-88b9-485d-a032-5767edc49205" (UID: "b167cf4e-88b9-485d-a032-5767edc49205"). InnerVolumeSpecName "kube-api-access-vtkzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.268441 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util" (OuterVolumeSpecName: "util") pod "b167cf4e-88b9-485d-a032-5767edc49205" (UID: "b167cf4e-88b9-485d-a032-5767edc49205"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.355669 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.355710 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b167cf4e-88b9-485d-a032-5767edc49205-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.355727 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkzb\" (UniqueName: \"kubernetes.io/projected/b167cf4e-88b9-485d-a032-5767edc49205-kube-api-access-vtkzb\") on node \"crc\" DevicePath \"\"" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.851012 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" event={"ID":"b167cf4e-88b9-485d-a032-5767edc49205","Type":"ContainerDied","Data":"d14087bc61735ec3f332661dac55ea6d5325221f1906f876686e8743ea7eaf43"} Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.851072 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14087bc61735ec3f332661dac55ea6d5325221f1906f876686e8743ea7eaf43" Jan 26 21:09:39 crc kubenswrapper[4899]: I0126 21:09:39.851212 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.683415 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:09:47 crc kubenswrapper[4899]: E0126 21:09:47.684215 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="util" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.684229 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="util" Jan 26 21:09:47 crc kubenswrapper[4899]: E0126 21:09:47.684243 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="pull" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.684248 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="pull" Jan 26 21:09:47 crc kubenswrapper[4899]: E0126 21:09:47.684264 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="extract" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.684271 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="extract" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.684380 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b167cf4e-88b9-485d-a032-5767edc49205" containerName="extract" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.684792 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.689685 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-tqm2v" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.707752 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.786762 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tbcz\" (UniqueName: \"kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz\") pod \"rabbitmq-cluster-operator-779fc9694b-j6sdx\" (UID: \"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.888366 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tbcz\" (UniqueName: \"kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz\") pod \"rabbitmq-cluster-operator-779fc9694b-j6sdx\" (UID: \"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:09:47 crc kubenswrapper[4899]: I0126 21:09:47.909696 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tbcz\" (UniqueName: \"kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz\") pod \"rabbitmq-cluster-operator-779fc9694b-j6sdx\" (UID: \"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:09:48 crc kubenswrapper[4899]: I0126 21:09:48.005482 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:09:48 crc kubenswrapper[4899]: I0126 21:09:48.210382 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:09:48 crc kubenswrapper[4899]: I0126 21:09:48.907694 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" event={"ID":"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0","Type":"ContainerStarted","Data":"379339beac979adcfc93551657239aaaee4a20e5aae06c3248d23fcf819a4df7"} Jan 26 21:09:52 crc kubenswrapper[4899]: I0126 21:09:52.951598 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" event={"ID":"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0","Type":"ContainerStarted","Data":"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486"} Jan 26 21:09:52 crc kubenswrapper[4899]: I0126 21:09:52.969999 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" podStartSLOduration=1.552236541 podStartE2EDuration="5.96998053s" podCreationTimestamp="2026-01-26 21:09:47 +0000 UTC" firstStartedPulling="2026-01-26 21:09:48.225533456 +0000 UTC m=+877.607121493" lastFinishedPulling="2026-01-26 21:09:52.643277445 +0000 UTC m=+882.024865482" observedRunningTime="2026-01-26 21:09:52.967390106 +0000 UTC m=+882.348978163" watchObservedRunningTime="2026-01-26 21:09:52.96998053 +0000 UTC m=+882.351568577" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.527958 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.529499 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.533966 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"rabbitmq-plugins-conf" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.534219 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.534959 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"rabbitmq-server-dockercfg-nhskn" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.536500 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"manila-kuttl-tests"/"rabbitmq-server-conf" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.536647 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"rabbitmq-default-user" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.544016 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681126 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681397 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681441 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681459 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681483 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681512 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqlv\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.681743 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc 
kubenswrapper[4899]: I0126 21:09:59.681819 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783723 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783805 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqlv\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783846 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783867 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 
21:09:59.783908 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783941 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783977 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.783993 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.784369 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.784421 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.785374 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.786906 4899 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.786948 4899 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99b008a85cc3efee0a5d7bfd06ce98f6249a9d1eadbec94e24aa2b888f01ac76/globalmount\"" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.789817 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.789879 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.790741 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.806690 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqlv\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.820015 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") pod \"rabbitmq-server-0\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:09:59 crc kubenswrapper[4899]: I0126 21:09:59.863402 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:10:00 crc kubenswrapper[4899]: I0126 21:10:00.221455 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.000391 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerStarted","Data":"4450cbcddea0853234b20d9596d3e9c57a9acc03033c04a9e100511d79ae9584"} Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.324831 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.326416 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.331012 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.333381 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-cnjdb" Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.412725 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgxh\" (UniqueName: \"kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh\") pod \"keystone-operator-index-4tsbp\" (UID: \"ba72f737-1c99-4652-b573-d3a6b5c5a191\") " pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.513709 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgxh\" (UniqueName: \"kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh\") pod 
\"keystone-operator-index-4tsbp\" (UID: \"ba72f737-1c99-4652-b573-d3a6b5c5a191\") " pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.542549 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgxh\" (UniqueName: \"kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh\") pod \"keystone-operator-index-4tsbp\" (UID: \"ba72f737-1c99-4652-b573-d3a6b5c5a191\") " pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:01 crc kubenswrapper[4899]: I0126 21:10:01.643879 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:02 crc kubenswrapper[4899]: I0126 21:10:02.015749 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:10:02 crc kubenswrapper[4899]: W0126 21:10:02.034817 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba72f737_1c99_4652_b573_d3a6b5c5a191.slice/crio-cad2ec4b4997fe5a43fd3f878a8e0fdac74a5f7abcc81cbcadda0d0c08aa1195 WatchSource:0}: Error finding container cad2ec4b4997fe5a43fd3f878a8e0fdac74a5f7abcc81cbcadda0d0c08aa1195: Status 404 returned error can't find the container with id cad2ec4b4997fe5a43fd3f878a8e0fdac74a5f7abcc81cbcadda0d0c08aa1195 Jan 26 21:10:03 crc kubenswrapper[4899]: I0126 21:10:03.040546 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-4tsbp" event={"ID":"ba72f737-1c99-4652-b573-d3a6b5c5a191","Type":"ContainerStarted","Data":"cad2ec4b4997fe5a43fd3f878a8e0fdac74a5f7abcc81cbcadda0d0c08aa1195"} Jan 26 21:10:05 crc kubenswrapper[4899]: I0126 21:10:05.055917 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-4tsbp" 
event={"ID":"ba72f737-1c99-4652-b573-d3a6b5c5a191","Type":"ContainerStarted","Data":"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701"} Jan 26 21:10:05 crc kubenswrapper[4899]: I0126 21:10:05.075523 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-4tsbp" podStartSLOduration=1.621674099 podStartE2EDuration="4.075488285s" podCreationTimestamp="2026-01-26 21:10:01 +0000 UTC" firstStartedPulling="2026-01-26 21:10:02.040807983 +0000 UTC m=+891.422396020" lastFinishedPulling="2026-01-26 21:10:04.494622169 +0000 UTC m=+893.876210206" observedRunningTime="2026-01-26 21:10:05.069179896 +0000 UTC m=+894.450767933" watchObservedRunningTime="2026-01-26 21:10:05.075488285 +0000 UTC m=+894.457076322" Jan 26 21:10:10 crc kubenswrapper[4899]: I0126 21:10:10.087554 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerStarted","Data":"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15"} Jan 26 21:10:11 crc kubenswrapper[4899]: I0126 21:10:11.644575 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:11 crc kubenswrapper[4899]: I0126 21:10:11.645187 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:11 crc kubenswrapper[4899]: I0126 21:10:11.685508 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:12 crc kubenswrapper[4899]: I0126 21:10:12.130446 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.758783 4899 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc"] Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.760022 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.762533 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-44wdn" Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.775485 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc"] Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.921858 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.922016 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:13 crc kubenswrapper[4899]: I0126 21:10:13.922122 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkjl\" (UniqueName: \"kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl\") pod 
\"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.023323 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.023400 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkjl\" (UniqueName: \"kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.023439 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.023794 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " 
pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.023877 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.042564 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmkjl\" (UniqueName: \"kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.091408 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:14 crc kubenswrapper[4899]: I0126 21:10:14.541202 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc"] Jan 26 21:10:15 crc kubenswrapper[4899]: I0126 21:10:15.121826 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerStarted","Data":"85b1110da9dea7a412ab95d95d4c4d786203bba00728cd9048baa6b288ac7d79"} Jan 26 21:10:16 crc kubenswrapper[4899]: I0126 21:10:16.132681 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerStarted","Data":"5a4b7b12abaf313fbd20db307215ff14307bcbe06e080a43181b829ef0feb5e7"} Jan 26 21:10:17 crc kubenswrapper[4899]: I0126 21:10:17.142857 4899 generic.go:334] "Generic (PLEG): container finished" podID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerID="5a4b7b12abaf313fbd20db307215ff14307bcbe06e080a43181b829ef0feb5e7" exitCode=0 Jan 26 21:10:17 crc kubenswrapper[4899]: I0126 21:10:17.142901 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerDied","Data":"5a4b7b12abaf313fbd20db307215ff14307bcbe06e080a43181b829ef0feb5e7"} Jan 26 21:10:18 crc kubenswrapper[4899]: I0126 21:10:18.152105 4899 generic.go:334] "Generic (PLEG): container finished" podID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerID="b163ea38606c5443d2fbae9278cfe6c8b71e2ad920fc9b653db3620cb4031072" exitCode=0 Jan 26 21:10:18 crc kubenswrapper[4899]: I0126 21:10:18.152347 4899 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerDied","Data":"b163ea38606c5443d2fbae9278cfe6c8b71e2ad920fc9b653db3620cb4031072"} Jan 26 21:10:19 crc kubenswrapper[4899]: I0126 21:10:19.160671 4899 generic.go:334] "Generic (PLEG): container finished" podID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerID="0949c2a521d3a4b80a574ccf22e3111d270f6a139c1b0ec5e6a568969dd7cfa8" exitCode=0 Jan 26 21:10:19 crc kubenswrapper[4899]: I0126 21:10:19.160720 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerDied","Data":"0949c2a521d3a4b80a574ccf22e3111d270f6a139c1b0ec5e6a568969dd7cfa8"} Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.443355 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.517155 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmkjl\" (UniqueName: \"kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl\") pod \"b537c2b0-ed88-404b-89ab-3259ac07f08e\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.517238 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle\") pod \"b537c2b0-ed88-404b-89ab-3259ac07f08e\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.517322 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util\") pod \"b537c2b0-ed88-404b-89ab-3259ac07f08e\" (UID: \"b537c2b0-ed88-404b-89ab-3259ac07f08e\") " Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.518584 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle" (OuterVolumeSpecName: "bundle") pod "b537c2b0-ed88-404b-89ab-3259ac07f08e" (UID: "b537c2b0-ed88-404b-89ab-3259ac07f08e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.523142 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl" (OuterVolumeSpecName: "kube-api-access-tmkjl") pod "b537c2b0-ed88-404b-89ab-3259ac07f08e" (UID: "b537c2b0-ed88-404b-89ab-3259ac07f08e"). InnerVolumeSpecName "kube-api-access-tmkjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.532989 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util" (OuterVolumeSpecName: "util") pod "b537c2b0-ed88-404b-89ab-3259ac07f08e" (UID: "b537c2b0-ed88-404b-89ab-3259ac07f08e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.619145 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.619178 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmkjl\" (UniqueName: \"kubernetes.io/projected/b537c2b0-ed88-404b-89ab-3259ac07f08e-kube-api-access-tmkjl\") on node \"crc\" DevicePath \"\"" Jan 26 21:10:20 crc kubenswrapper[4899]: I0126 21:10:20.619189 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b537c2b0-ed88-404b-89ab-3259ac07f08e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:10:21 crc kubenswrapper[4899]: I0126 21:10:21.175496 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" event={"ID":"b537c2b0-ed88-404b-89ab-3259ac07f08e","Type":"ContainerDied","Data":"85b1110da9dea7a412ab95d95d4c4d786203bba00728cd9048baa6b288ac7d79"} Jan 26 21:10:21 crc kubenswrapper[4899]: I0126 21:10:21.175537 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85b1110da9dea7a412ab95d95d4c4d786203bba00728cd9048baa6b288ac7d79" Jan 26 21:10:21 crc kubenswrapper[4899]: I0126 21:10:21.175639 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.109237 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.109829 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.987331 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:10:30 crc kubenswrapper[4899]: E0126 21:10:30.988186 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="util" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.988202 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="util" Jan 26 21:10:30 crc kubenswrapper[4899]: E0126 21:10:30.988219 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="pull" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.988227 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="pull" Jan 26 21:10:30 crc kubenswrapper[4899]: E0126 21:10:30.988250 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" 
containerName="extract" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.988258 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="extract" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.988412 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" containerName="extract" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.988985 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.991281 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 26 21:10:30 crc kubenswrapper[4899]: I0126 21:10:30.991396 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m4sc9" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.007939 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.068399 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.068547 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert\") pod 
\"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.068574 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljjf\" (UniqueName: \"kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.170129 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.170174 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljjf\" (UniqueName: \"kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.170234 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " 
pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.177724 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.186279 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.193682 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljjf\" (UniqueName: \"kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf\") pod \"keystone-operator-controller-manager-77c4c5f769-kdd52\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:31 crc kubenswrapper[4899]: I0126 21:10:31.306193 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:10:32 crc kubenswrapper[4899]: I0126 21:10:32.278521 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:10:33 crc kubenswrapper[4899]: I0126 21:10:33.254597 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" event={"ID":"e0134143-cc77-4e5e-8ae8-1e431f6e32bc","Type":"ContainerStarted","Data":"88cf8ac7d4ea160dd883ecaf376bb7ca93df513bc769f2353901d37d1f561591"} Jan 26 21:10:45 crc kubenswrapper[4899]: I0126 21:10:45.348820 4899 generic.go:334] "Generic (PLEG): container finished" podID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerID="a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15" exitCode=0 Jan 26 21:10:45 crc kubenswrapper[4899]: I0126 21:10:45.348901 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerDied","Data":"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15"} Jan 26 21:10:51 crc kubenswrapper[4899]: E0126 21:10:51.104785 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 26 21:10:51 crc kubenswrapper[4899]: E0126 21:10:51.105524 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 
--webhook-cert-path=/tmp/k8s-webhook-server/serving-certs],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:webhook-server,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:keystone-operator.v0.0.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ljjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-77c4c5f769-kdd52_openstack-operators(e0134143-cc77-4e5e-8ae8-1e431f6e32bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 21:10:51 crc kubenswrapper[4899]: E0126 21:10:51.106864 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.123291 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.125379 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.131177 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.301146 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.301239 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.301286 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtzx\" (UniqueName: \"kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.388748 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerStarted","Data":"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643"} Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.389297 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:10:51 crc 
kubenswrapper[4899]: E0126 21:10:51.390393 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.402470 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.402540 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.402584 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtzx\" (UniqueName: \"kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.403049 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " 
pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.403143 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.424859 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtzx\" (UniqueName: \"kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx\") pod \"redhat-marketplace-vrbxd\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.454390 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/rabbitmq-server-0" podStartSLOduration=44.557753794 podStartE2EDuration="53.454367844s" podCreationTimestamp="2026-01-26 21:09:58 +0000 UTC" firstStartedPulling="2026-01-26 21:10:00.230829295 +0000 UTC m=+889.612417332" lastFinishedPulling="2026-01-26 21:10:09.127443355 +0000 UTC m=+898.509031382" observedRunningTime="2026-01-26 21:10:51.448325071 +0000 UTC m=+940.829913108" watchObservedRunningTime="2026-01-26 21:10:51.454367844 +0000 UTC m=+940.835955881" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.469172 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:10:51 crc kubenswrapper[4899]: I0126 21:10:51.958814 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:10:51 crc kubenswrapper[4899]: W0126 21:10:51.963004 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05904be8_bef3_4082_b526_37fdf20ff892.slice/crio-5c33c3b59b03fef85049152ee7ad7d14a746abf3886dfa28fb3ba7cf08036cec WatchSource:0}: Error finding container 5c33c3b59b03fef85049152ee7ad7d14a746abf3886dfa28fb3ba7cf08036cec: Status 404 returned error can't find the container with id 5c33c3b59b03fef85049152ee7ad7d14a746abf3886dfa28fb3ba7cf08036cec Jan 26 21:10:52 crc kubenswrapper[4899]: I0126 21:10:52.397070 4899 generic.go:334] "Generic (PLEG): container finished" podID="05904be8-bef3-4082-b526-37fdf20ff892" containerID="1c0e06dfff822a10105c49065ec67bb4f43550050e34b10ccf5319c43c0e61bf" exitCode=0 Jan 26 21:10:52 crc kubenswrapper[4899]: I0126 21:10:52.398459 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerDied","Data":"1c0e06dfff822a10105c49065ec67bb4f43550050e34b10ccf5319c43c0e61bf"} Jan 26 21:10:52 crc kubenswrapper[4899]: I0126 21:10:52.398694 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerStarted","Data":"5c33c3b59b03fef85049152ee7ad7d14a746abf3886dfa28fb3ba7cf08036cec"} Jan 26 21:10:52 crc kubenswrapper[4899]: I0126 21:10:52.399667 4899 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 21:10:54 crc kubenswrapper[4899]: I0126 21:10:54.409943 4899 generic.go:334] "Generic (PLEG): container finished" 
podID="05904be8-bef3-4082-b526-37fdf20ff892" containerID="ef754fa738654d163161e5de7036e14c0a50cf4f9a0a6ef6929a8f527c4c24bc" exitCode=0 Jan 26 21:10:54 crc kubenswrapper[4899]: I0126 21:10:54.410008 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerDied","Data":"ef754fa738654d163161e5de7036e14c0a50cf4f9a0a6ef6929a8f527c4c24bc"} Jan 26 21:10:55 crc kubenswrapper[4899]: I0126 21:10:55.418243 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerStarted","Data":"8f1142c64c257955f33475cc27d4f3e5d039319596ce877d3faab87f1c5fd4df"} Jan 26 21:10:55 crc kubenswrapper[4899]: I0126 21:10:55.435289 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrbxd" podStartSLOduration=2.004890971 podStartE2EDuration="4.435270079s" podCreationTimestamp="2026-01-26 21:10:51 +0000 UTC" firstStartedPulling="2026-01-26 21:10:52.399386953 +0000 UTC m=+941.780974990" lastFinishedPulling="2026-01-26 21:10:54.829766061 +0000 UTC m=+944.211354098" observedRunningTime="2026-01-26 21:10:55.433267282 +0000 UTC m=+944.814855319" watchObservedRunningTime="2026-01-26 21:10:55.435270079 +0000 UTC m=+944.816858116" Jan 26 21:11:00 crc kubenswrapper[4899]: I0126 21:11:00.109105 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:11:00 crc kubenswrapper[4899]: I0126 21:11:00.109499 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:11:01 crc kubenswrapper[4899]: I0126 21:11:01.470438 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:01 crc kubenswrapper[4899]: I0126 21:11:01.470510 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:01 crc kubenswrapper[4899]: I0126 21:11:01.513728 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:02 crc kubenswrapper[4899]: I0126 21:11:02.511233 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.116440 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.118695 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.129440 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.179908 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.179980 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv75m\" (UniqueName: \"kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.180007 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.281543 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.281585 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rv75m\" (UniqueName: \"kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.281603 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.282114 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.282263 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.310657 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv75m\" (UniqueName: \"kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m\") pod \"community-operators-kg2p8\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:04 crc kubenswrapper[4899]: I0126 21:11:04.444784 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:05 crc kubenswrapper[4899]: I0126 21:11:05.002062 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:05 crc kubenswrapper[4899]: I0126 21:11:05.493291 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerStarted","Data":"4f6eb6605b50a8f8962aab2a658d68d2f8d1d1d7a79565219430cbdd484e08af"} Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.304384 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.304672 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrbxd" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="registry-server" containerID="cri-o://8f1142c64c257955f33475cc27d4f3e5d039319596ce877d3faab87f1c5fd4df" gracePeriod=2 Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.506091 4899 generic.go:334] "Generic (PLEG): container finished" podID="05904be8-bef3-4082-b526-37fdf20ff892" containerID="8f1142c64c257955f33475cc27d4f3e5d039319596ce877d3faab87f1c5fd4df" exitCode=0 Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.506289 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerDied","Data":"8f1142c64c257955f33475cc27d4f3e5d039319596ce877d3faab87f1c5fd4df"} Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.507916 4899 generic.go:334] "Generic (PLEG): container finished" podID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerID="61a0eadedda8fec34162e4c4341d650e0fd054a88622c15fdd4f347911b74177" exitCode=0 Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 
21:11:06.507948 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerDied","Data":"61a0eadedda8fec34162e4c4341d650e0fd054a88622c15fdd4f347911b74177"} Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.701484 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.825290 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content\") pod \"05904be8-bef3-4082-b526-37fdf20ff892\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.825353 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtzx\" (UniqueName: \"kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx\") pod \"05904be8-bef3-4082-b526-37fdf20ff892\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.825635 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities\") pod \"05904be8-bef3-4082-b526-37fdf20ff892\" (UID: \"05904be8-bef3-4082-b526-37fdf20ff892\") " Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.826258 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities" (OuterVolumeSpecName: "utilities") pod "05904be8-bef3-4082-b526-37fdf20ff892" (UID: "05904be8-bef3-4082-b526-37fdf20ff892"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.832708 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx" (OuterVolumeSpecName: "kube-api-access-wgtzx") pod "05904be8-bef3-4082-b526-37fdf20ff892" (UID: "05904be8-bef3-4082-b526-37fdf20ff892"). InnerVolumeSpecName "kube-api-access-wgtzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.857190 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05904be8-bef3-4082-b526-37fdf20ff892" (UID: "05904be8-bef3-4082-b526-37fdf20ff892"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.927168 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.927200 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05904be8-bef3-4082-b526-37fdf20ff892-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:06 crc kubenswrapper[4899]: I0126 21:11:06.927215 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtzx\" (UniqueName: \"kubernetes.io/projected/05904be8-bef3-4082-b526-37fdf20ff892-kube-api-access-wgtzx\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.520018 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" 
event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerStarted","Data":"b84a23253d661496d25cbdcb69ecbacf9d3e4683db66202f33a06950b9daaa56"} Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.522784 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" event={"ID":"e0134143-cc77-4e5e-8ae8-1e431f6e32bc","Type":"ContainerStarted","Data":"2824018950b84b0563c475c6bb42a452da4b695e93fa0b3167aeef1a27c8b630"} Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.523114 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.524956 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrbxd" event={"ID":"05904be8-bef3-4082-b526-37fdf20ff892","Type":"ContainerDied","Data":"5c33c3b59b03fef85049152ee7ad7d14a746abf3886dfa28fb3ba7cf08036cec"} Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.524986 4899 scope.go:117] "RemoveContainer" containerID="8f1142c64c257955f33475cc27d4f3e5d039319596ce877d3faab87f1c5fd4df" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.525182 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrbxd" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.546074 4899 scope.go:117] "RemoveContainer" containerID="ef754fa738654d163161e5de7036e14c0a50cf4f9a0a6ef6929a8f527c4c24bc" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.553998 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.559214 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrbxd"] Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.584806 4899 scope.go:117] "RemoveContainer" containerID="1c0e06dfff822a10105c49065ec67bb4f43550050e34b10ccf5319c43c0e61bf" Jan 26 21:11:07 crc kubenswrapper[4899]: I0126 21:11:07.585110 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" podStartSLOduration=3.5017369560000002 podStartE2EDuration="37.585099687s" podCreationTimestamp="2026-01-26 21:10:30 +0000 UTC" firstStartedPulling="2026-01-26 21:10:32.288064884 +0000 UTC m=+921.669652931" lastFinishedPulling="2026-01-26 21:11:06.371427625 +0000 UTC m=+955.753015662" observedRunningTime="2026-01-26 21:11:07.58100966 +0000 UTC m=+956.962597727" watchObservedRunningTime="2026-01-26 21:11:07.585099687 +0000 UTC m=+956.966687724" Jan 26 21:11:08 crc kubenswrapper[4899]: I0126 21:11:08.533082 4899 generic.go:334] "Generic (PLEG): container finished" podID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerID="b84a23253d661496d25cbdcb69ecbacf9d3e4683db66202f33a06950b9daaa56" exitCode=0 Jan 26 21:11:08 crc kubenswrapper[4899]: I0126 21:11:08.533203 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" 
event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerDied","Data":"b84a23253d661496d25cbdcb69ecbacf9d3e4683db66202f33a06950b9daaa56"} Jan 26 21:11:08 crc kubenswrapper[4899]: I0126 21:11:08.942949 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05904be8-bef3-4082-b526-37fdf20ff892" path="/var/lib/kubelet/pods/05904be8-bef3-4082-b526-37fdf20ff892/volumes" Jan 26 21:11:09 crc kubenswrapper[4899]: I0126 21:11:09.543233 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerStarted","Data":"82ab58cced5688953f5ba0527456ceba1c8badcf03103664091e0f7dca6f305b"} Jan 26 21:11:09 crc kubenswrapper[4899]: I0126 21:11:09.559769 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kg2p8" podStartSLOduration=2.855675949 podStartE2EDuration="5.559747304s" podCreationTimestamp="2026-01-26 21:11:04 +0000 UTC" firstStartedPulling="2026-01-26 21:11:06.50956509 +0000 UTC m=+955.891153127" lastFinishedPulling="2026-01-26 21:11:09.213636435 +0000 UTC m=+958.595224482" observedRunningTime="2026-01-26 21:11:09.558005554 +0000 UTC m=+958.939593611" watchObservedRunningTime="2026-01-26 21:11:09.559747304 +0000 UTC m=+958.941335341" Jan 26 21:11:09 crc kubenswrapper[4899]: I0126 21:11:09.866262 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:11:11 crc kubenswrapper[4899]: I0126 21:11:11.310791 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:11:14 crc kubenswrapper[4899]: I0126 21:11:14.445552 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:14 crc kubenswrapper[4899]: I0126 21:11:14.445972 
4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:14 crc kubenswrapper[4899]: I0126 21:11:14.489915 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:14 crc kubenswrapper[4899]: I0126 21:11:14.619686 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.660856 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt"] Jan 26 21:11:15 crc kubenswrapper[4899]: E0126 21:11:15.661244 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="registry-server" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.661262 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="registry-server" Jan 26 21:11:15 crc kubenswrapper[4899]: E0126 21:11:15.661292 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="extract-utilities" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.661299 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="extract-utilities" Jan 26 21:11:15 crc kubenswrapper[4899]: E0126 21:11:15.661314 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="extract-content" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.661322 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="extract-content" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.661459 4899 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="05904be8-bef3-4082-b526-37fdf20ff892" containerName="registry-server" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.661939 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.664096 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-db-secret" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.667104 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone-db-create-pmsq9"] Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.668139 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.682740 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-db-create-pmsq9"] Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.688658 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt"] Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.747798 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gvz\" (UniqueName: \"kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.747876 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfbq\" (UniqueName: \"kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq\") pod \"keystone-db-create-pmsq9\" (UID: 
\"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.747902 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts\") pod \"keystone-db-create-pmsq9\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.748191 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.849587 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.849703 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gvz\" (UniqueName: \"kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.849757 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts\") pod \"keystone-db-create-pmsq9\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.849778 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfbq\" (UniqueName: \"kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq\") pod \"keystone-db-create-pmsq9\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.850539 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts\") pod \"keystone-db-create-pmsq9\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.851028 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.867210 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gvz\" (UniqueName: \"kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz\") pod \"keystone-57b7-account-create-update-sgpdt\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.867282 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jcfbq\" (UniqueName: \"kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq\") pod \"keystone-db-create-pmsq9\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.907815 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.980827 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.989030 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:15 crc kubenswrapper[4899]: I0126 21:11:15.999870 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/ceph"] Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.000776 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.003118 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"default-dockercfg-s77fb" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.054656 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.054707 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsns\" (UniqueName: \"kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.054758 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.054810 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.155768 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 
21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156145 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156189 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsns\" (UniqueName: \"kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156223 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156440 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156649 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.156663 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc 
kubenswrapper[4899]: I0126 21:11:16.177598 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsns\" (UniqueName: \"kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns\") pod \"ceph\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.369203 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/ceph" Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.378536 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-db-create-pmsq9"] Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.533940 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt"] Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.599287 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/ceph" event={"ID":"951664be-c618-4a13-8265-32cf5a4d7cf1","Type":"ContainerStarted","Data":"cd7f6fa26b8a5a6d30eb55e7eef02bbc78970f8fe4c5ac26927eb2fa291b67ee"} Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.604812 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" event={"ID":"553baa7a-de49-4c87-9cb2-a57838ac671a","Type":"ContainerStarted","Data":"0d3606e450a921d2db2b6dade99f9b09cebb485d20be61f33133376c89352cbd"} Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.606862 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-create-pmsq9" event={"ID":"7ddc2cab-c784-48b0-9ac8-202189823ab2","Type":"ContainerStarted","Data":"dfe158e86a56d6fb60cd05721e6bed77cd1a660383211f9974c1264c339ff3b5"} Jan 26 21:11:16 crc kubenswrapper[4899]: I0126 21:11:16.607064 4899 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-kg2p8" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="registry-server" containerID="cri-o://82ab58cced5688953f5ba0527456ceba1c8badcf03103664091e0f7dca6f305b" gracePeriod=2 Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.615153 4899 generic.go:334] "Generic (PLEG): container finished" podID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerID="82ab58cced5688953f5ba0527456ceba1c8badcf03103664091e0f7dca6f305b" exitCode=0 Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.615229 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerDied","Data":"82ab58cced5688953f5ba0527456ceba1c8badcf03103664091e0f7dca6f305b"} Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.617320 4899 generic.go:334] "Generic (PLEG): container finished" podID="7ddc2cab-c784-48b0-9ac8-202189823ab2" containerID="a23c18f9f54b53c233d0fb7b0cc84351b4afa0e96471c277b8e2870d151fafb3" exitCode=0 Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.617402 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-create-pmsq9" event={"ID":"7ddc2cab-c784-48b0-9ac8-202189823ab2","Type":"ContainerDied","Data":"a23c18f9f54b53c233d0fb7b0cc84351b4afa0e96471c277b8e2870d151fafb3"} Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.620401 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" event={"ID":"553baa7a-de49-4c87-9cb2-a57838ac671a","Type":"ContainerStarted","Data":"0e78b16a213017dbe04ebf891ddcbcf672337af40f3e3e0e5b75c31e2719551f"} Jan 26 21:11:17 crc kubenswrapper[4899]: I0126 21:11:17.688357 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" podStartSLOduration=2.688338376 podStartE2EDuration="2.688338376s" 
podCreationTimestamp="2026-01-26 21:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:11:17.687897244 +0000 UTC m=+967.069485291" watchObservedRunningTime="2026-01-26 21:11:17.688338376 +0000 UTC m=+967.069926433" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.097705 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.198412 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities\") pod \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.198696 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content\") pod \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.198775 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv75m\" (UniqueName: \"kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m\") pod \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\" (UID: \"0e21bd0c-66bd-4570-875e-ad4238d5ac8b\") " Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.199521 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities" (OuterVolumeSpecName: "utilities") pod "0e21bd0c-66bd-4570-875e-ad4238d5ac8b" (UID: "0e21bd0c-66bd-4570-875e-ad4238d5ac8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.204809 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m" (OuterVolumeSpecName: "kube-api-access-rv75m") pod "0e21bd0c-66bd-4570-875e-ad4238d5ac8b" (UID: "0e21bd0c-66bd-4570-875e-ad4238d5ac8b"). InnerVolumeSpecName "kube-api-access-rv75m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.272806 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e21bd0c-66bd-4570-875e-ad4238d5ac8b" (UID: "0e21bd0c-66bd-4570-875e-ad4238d5ac8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.301091 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.301141 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.301158 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv75m\" (UniqueName: \"kubernetes.io/projected/0e21bd0c-66bd-4570-875e-ad4238d5ac8b-kube-api-access-rv75m\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.632427 4899 generic.go:334] "Generic (PLEG): container finished" podID="553baa7a-de49-4c87-9cb2-a57838ac671a" 
containerID="0e78b16a213017dbe04ebf891ddcbcf672337af40f3e3e0e5b75c31e2719551f" exitCode=0 Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.632512 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" event={"ID":"553baa7a-de49-4c87-9cb2-a57838ac671a","Type":"ContainerDied","Data":"0e78b16a213017dbe04ebf891ddcbcf672337af40f3e3e0e5b75c31e2719551f"} Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.638688 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kg2p8" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.638684 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg2p8" event={"ID":"0e21bd0c-66bd-4570-875e-ad4238d5ac8b","Type":"ContainerDied","Data":"4f6eb6605b50a8f8962aab2a658d68d2f8d1d1d7a79565219430cbdd484e08af"} Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.638900 4899 scope.go:117] "RemoveContainer" containerID="82ab58cced5688953f5ba0527456ceba1c8badcf03103664091e0f7dca6f305b" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.664238 4899 scope.go:117] "RemoveContainer" containerID="b84a23253d661496d25cbdcb69ecbacf9d3e4683db66202f33a06950b9daaa56" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.689177 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.690519 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kg2p8"] Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.703863 4899 scope.go:117] "RemoveContainer" containerID="61a0eadedda8fec34162e4c4341d650e0fd054a88622c15fdd4f347911b74177" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.874955 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:18 crc kubenswrapper[4899]: I0126 21:11:18.942534 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" path="/var/lib/kubelet/pods/0e21bd0c-66bd-4570-875e-ad4238d5ac8b/volumes" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.009949 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts\") pod \"7ddc2cab-c784-48b0-9ac8-202189823ab2\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.010053 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcfbq\" (UniqueName: \"kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq\") pod \"7ddc2cab-c784-48b0-9ac8-202189823ab2\" (UID: \"7ddc2cab-c784-48b0-9ac8-202189823ab2\") " Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.010848 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ddc2cab-c784-48b0-9ac8-202189823ab2" (UID: "7ddc2cab-c784-48b0-9ac8-202189823ab2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.014521 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq" (OuterVolumeSpecName: "kube-api-access-jcfbq") pod "7ddc2cab-c784-48b0-9ac8-202189823ab2" (UID: "7ddc2cab-c784-48b0-9ac8-202189823ab2"). InnerVolumeSpecName "kube-api-access-jcfbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.111852 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcfbq\" (UniqueName: \"kubernetes.io/projected/7ddc2cab-c784-48b0-9ac8-202189823ab2-kube-api-access-jcfbq\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.111897 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddc2cab-c784-48b0-9ac8-202189823ab2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.648067 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-create-pmsq9" event={"ID":"7ddc2cab-c784-48b0-9ac8-202189823ab2","Type":"ContainerDied","Data":"dfe158e86a56d6fb60cd05721e6bed77cd1a660383211f9974c1264c339ff3b5"} Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.648471 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe158e86a56d6fb60cd05721e6bed77cd1a660383211f9974c1264c339ff3b5" Jan 26 21:11:19 crc kubenswrapper[4899]: I0126 21:11:19.648122 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-db-create-pmsq9" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.013621 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.065482 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7gvz\" (UniqueName: \"kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz\") pod \"553baa7a-de49-4c87-9cb2-a57838ac671a\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.065646 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts\") pod \"553baa7a-de49-4c87-9cb2-a57838ac671a\" (UID: \"553baa7a-de49-4c87-9cb2-a57838ac671a\") " Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.066135 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "553baa7a-de49-4c87-9cb2-a57838ac671a" (UID: "553baa7a-de49-4c87-9cb2-a57838ac671a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.083082 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz" (OuterVolumeSpecName: "kube-api-access-d7gvz") pod "553baa7a-de49-4c87-9cb2-a57838ac671a" (UID: "553baa7a-de49-4c87-9cb2-a57838ac671a"). InnerVolumeSpecName "kube-api-access-d7gvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.167614 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/553baa7a-de49-4c87-9cb2-a57838ac671a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.167647 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7gvz\" (UniqueName: \"kubernetes.io/projected/553baa7a-de49-4c87-9cb2-a57838ac671a-kube-api-access-d7gvz\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.222730 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:22 crc kubenswrapper[4899]: E0126 21:11:22.223356 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="registry-server" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223387 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="registry-server" Jan 26 21:11:22 crc kubenswrapper[4899]: E0126 21:11:22.223404 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="extract-utilities" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223418 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="extract-utilities" Jan 26 21:11:22 crc kubenswrapper[4899]: E0126 21:11:22.223441 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="extract-content" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223455 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="extract-content" Jan 26 21:11:22 crc 
kubenswrapper[4899]: E0126 21:11:22.223474 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553baa7a-de49-4c87-9cb2-a57838ac671a" containerName="mariadb-account-create-update" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223486 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="553baa7a-de49-4c87-9cb2-a57838ac671a" containerName="mariadb-account-create-update" Jan 26 21:11:22 crc kubenswrapper[4899]: E0126 21:11:22.223517 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddc2cab-c784-48b0-9ac8-202189823ab2" containerName="mariadb-database-create" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223529 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddc2cab-c784-48b0-9ac8-202189823ab2" containerName="mariadb-database-create" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223799 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="553baa7a-de49-4c87-9cb2-a57838ac671a" containerName="mariadb-account-create-update" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223821 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e21bd0c-66bd-4570-875e-ad4238d5ac8b" containerName="registry-server" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.223841 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddc2cab-c784-48b0-9ac8-202189823ab2" containerName="mariadb-database-create" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.225490 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.228811 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.370120 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.370260 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.370305 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprwg\" (UniqueName: \"kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.471732 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprwg\" (UniqueName: \"kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.471842 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.471907 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.472373 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.472415 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.492426 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sprwg\" (UniqueName: \"kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg\") pod \"redhat-operators-gd9qm\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.579280 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.678036 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" event={"ID":"553baa7a-de49-4c87-9cb2-a57838ac671a","Type":"ContainerDied","Data":"0d3606e450a921d2db2b6dade99f9b09cebb485d20be61f33133376c89352cbd"} Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.678349 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3606e450a921d2db2b6dade99f9b09cebb485d20be61f33133376c89352cbd" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.678111 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt" Jan 26 21:11:22 crc kubenswrapper[4899]: I0126 21:11:22.862358 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:23 crc kubenswrapper[4899]: I0126 21:11:23.686223 4899 generic.go:334] "Generic (PLEG): container finished" podID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerID="eb776f8707281671df0dc5d879ec89334878123289a54deeda83c995e98a7c7a" exitCode=0 Jan 26 21:11:23 crc kubenswrapper[4899]: I0126 21:11:23.686372 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerDied","Data":"eb776f8707281671df0dc5d879ec89334878123289a54deeda83c995e98a7c7a"} Jan 26 21:11:23 crc kubenswrapper[4899]: I0126 21:11:23.686573 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerStarted","Data":"d97f939896db4dd94e8fd06a04ac1723af376dd2791342c9ba879e5c81cb0c68"} Jan 26 21:11:24 crc kubenswrapper[4899]: I0126 21:11:24.706913 4899 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerStarted","Data":"b8dc944b6479c8b10666926db83778156c18e86b3d5c33cea4c59da271531769"} Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.159507 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone-db-sync-2dc4c"] Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.160340 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.163039 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-scripts" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.163175 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-config-data" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.163253 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.163550 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-keystone-dockercfg-4bxqr" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.176372 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-db-sync-2dc4c"] Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.229804 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.229854 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h99g8\" (UniqueName: \"kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.330699 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.330752 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99g8\" (UniqueName: \"kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.338458 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.346985 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99g8\" (UniqueName: \"kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8\") pod \"keystone-db-sync-2dc4c\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.483501 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.729877 4899 generic.go:334] "Generic (PLEG): container finished" podID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerID="b8dc944b6479c8b10666926db83778156c18e86b3d5c33cea4c59da271531769" exitCode=0 Jan 26 21:11:26 crc kubenswrapper[4899]: I0126 21:11:26.730202 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerDied","Data":"b8dc944b6479c8b10666926db83778156c18e86b3d5c33cea4c59da271531769"} Jan 26 21:11:27 crc kubenswrapper[4899]: I0126 21:11:27.052867 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-db-sync-2dc4c"] Jan 26 21:11:27 crc kubenswrapper[4899]: W0126 21:11:27.078483 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8596bee1_b6cc_499d_b944_7e6732399d9b.slice/crio-f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2 WatchSource:0}: Error finding container f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2: Status 404 returned error can't find the container with id f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2 Jan 26 21:11:27 crc kubenswrapper[4899]: I0126 21:11:27.740733 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerStarted","Data":"1ee0e5a3a46d923029d640bbb0a56353e1e501a27e7d1e347ea544e6500463cf"} Jan 26 21:11:27 crc kubenswrapper[4899]: I0126 21:11:27.742705 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" event={"ID":"8596bee1-b6cc-499d-b944-7e6732399d9b","Type":"ContainerStarted","Data":"f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2"} Jan 
26 21:11:27 crc kubenswrapper[4899]: I0126 21:11:27.761948 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gd9qm" podStartSLOduration=2.318768059 podStartE2EDuration="5.761914747s" podCreationTimestamp="2026-01-26 21:11:22 +0000 UTC" firstStartedPulling="2026-01-26 21:11:23.688354428 +0000 UTC m=+973.069942465" lastFinishedPulling="2026-01-26 21:11:27.131501116 +0000 UTC m=+976.513089153" observedRunningTime="2026-01-26 21:11:27.757125743 +0000 UTC m=+977.138713780" watchObservedRunningTime="2026-01-26 21:11:27.761914747 +0000 UTC m=+977.143502784" Jan 26 21:11:30 crc kubenswrapper[4899]: I0126 21:11:30.112760 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:11:30 crc kubenswrapper[4899]: I0126 21:11:30.113139 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:11:30 crc kubenswrapper[4899]: I0126 21:11:30.113191 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:11:30 crc kubenswrapper[4899]: I0126 21:11:30.113791 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Jan 26 21:11:30 crc kubenswrapper[4899]: I0126 21:11:30.113834 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe" gracePeriod=600 Jan 26 21:11:32 crc kubenswrapper[4899]: I0126 21:11:32.581044 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:32 crc kubenswrapper[4899]: I0126 21:11:32.581377 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:32 crc kubenswrapper[4899]: I0126 21:11:32.784275 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe" exitCode=0 Jan 26 21:11:32 crc kubenswrapper[4899]: I0126 21:11:32.784330 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe"} Jan 26 21:11:32 crc kubenswrapper[4899]: I0126 21:11:32.784423 4899 scope.go:117] "RemoveContainer" containerID="d398f4687edca03fbdacf22de0045af2ea5d8affbf070e2faa1f8131fff946bc" Jan 26 21:11:33 crc kubenswrapper[4899]: I0126 21:11:33.628720 4899 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gd9qm" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="registry-server" probeResult="failure" output=< Jan 26 21:11:33 crc kubenswrapper[4899]: timeout: failed to connect service ":50051" within 1s Jan 26 21:11:33 crc kubenswrapper[4899]: > Jan 26 21:11:42 crc 
kubenswrapper[4899]: I0126 21:11:42.652686 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:42 crc kubenswrapper[4899]: I0126 21:11:42.726511 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:42 crc kubenswrapper[4899]: I0126 21:11:42.917304 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:43 crc kubenswrapper[4899]: I0126 21:11:43.875179 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gd9qm" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="registry-server" containerID="cri-o://1ee0e5a3a46d923029d640bbb0a56353e1e501a27e7d1e347ea544e6500463cf" gracePeriod=2 Jan 26 21:11:44 crc kubenswrapper[4899]: I0126 21:11:44.893313 4899 generic.go:334] "Generic (PLEG): container finished" podID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerID="1ee0e5a3a46d923029d640bbb0a56353e1e501a27e7d1e347ea544e6500463cf" exitCode=0 Jan 26 21:11:44 crc kubenswrapper[4899]: I0126 21:11:44.893423 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerDied","Data":"1ee0e5a3a46d923029d640bbb0a56353e1e501a27e7d1e347ea544e6500463cf"} Jan 26 21:11:45 crc kubenswrapper[4899]: E0126 21:11:45.828289 4899 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Jan 26 21:11:45 crc kubenswrapper[4899]: E0126 21:11:45.828637 4899 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hsns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_manila-kuttl-tests(951664be-c618-4a13-8265-32cf5a4d7cf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 21:11:45 crc kubenswrapper[4899]: E0126 21:11:45.829746 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="manila-kuttl-tests/ceph" 
podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.847805 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.908436 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gd9qm" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.908619 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gd9qm" event={"ID":"4a2c7a03-b093-4607-9583-30dff2d55ad4","Type":"ContainerDied","Data":"d97f939896db4dd94e8fd06a04ac1723af376dd2791342c9ba879e5c81cb0c68"} Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.908654 4899 scope.go:117] "RemoveContainer" containerID="1ee0e5a3a46d923029d640bbb0a56353e1e501a27e7d1e347ea544e6500463cf" Jan 26 21:11:45 crc kubenswrapper[4899]: E0126 21:11:45.911510 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="manila-kuttl-tests/ceph" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.935437 4899 scope.go:117] "RemoveContainer" containerID="b8dc944b6479c8b10666926db83778156c18e86b3d5c33cea4c59da271531769" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.957462 4899 scope.go:117] "RemoveContainer" containerID="eb776f8707281671df0dc5d879ec89334878123289a54deeda83c995e98a7c7a" Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.998470 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sprwg\" (UniqueName: \"kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg\") pod \"4a2c7a03-b093-4607-9583-30dff2d55ad4\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " Jan 26 
21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.998587 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content\") pod \"4a2c7a03-b093-4607-9583-30dff2d55ad4\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " Jan 26 21:11:45 crc kubenswrapper[4899]: I0126 21:11:45.998629 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities\") pod \"4a2c7a03-b093-4607-9583-30dff2d55ad4\" (UID: \"4a2c7a03-b093-4607-9583-30dff2d55ad4\") " Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.000779 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities" (OuterVolumeSpecName: "utilities") pod "4a2c7a03-b093-4607-9583-30dff2d55ad4" (UID: "4a2c7a03-b093-4607-9583-30dff2d55ad4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.005337 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg" (OuterVolumeSpecName: "kube-api-access-sprwg") pod "4a2c7a03-b093-4607-9583-30dff2d55ad4" (UID: "4a2c7a03-b093-4607-9583-30dff2d55ad4"). InnerVolumeSpecName "kube-api-access-sprwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.104275 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sprwg\" (UniqueName: \"kubernetes.io/projected/4a2c7a03-b093-4607-9583-30dff2d55ad4-kube-api-access-sprwg\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.104380 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.117085 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a2c7a03-b093-4607-9583-30dff2d55ad4" (UID: "4a2c7a03-b093-4607-9583-30dff2d55ad4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.205420 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2c7a03-b093-4607-9583-30dff2d55ad4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.249878 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.260916 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gd9qm"] Jan 26 21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.917871 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521"} Jan 26 
21:11:46 crc kubenswrapper[4899]: I0126 21:11:46.955223 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" path="/var/lib/kubelet/pods/4a2c7a03-b093-4607-9583-30dff2d55ad4/volumes" Jan 26 21:11:51 crc kubenswrapper[4899]: I0126 21:11:51.965072 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" event={"ID":"8596bee1-b6cc-499d-b944-7e6732399d9b","Type":"ContainerStarted","Data":"2528605dc2ee759014314217bc33a4b9311bfb4874ac4288f66b4c65a6e048ba"} Jan 26 21:11:51 crc kubenswrapper[4899]: I0126 21:11:51.987760 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" podStartSLOduration=1.770094265 podStartE2EDuration="25.98772986s" podCreationTimestamp="2026-01-26 21:11:26 +0000 UTC" firstStartedPulling="2026-01-26 21:11:27.080988976 +0000 UTC m=+976.462577003" lastFinishedPulling="2026-01-26 21:11:51.298624541 +0000 UTC m=+1000.680212598" observedRunningTime="2026-01-26 21:11:51.984473649 +0000 UTC m=+1001.366061706" watchObservedRunningTime="2026-01-26 21:11:51.98772986 +0000 UTC m=+1001.369317937" Jan 26 21:11:56 crc kubenswrapper[4899]: I0126 21:11:56.998910 4899 generic.go:334] "Generic (PLEG): container finished" podID="8596bee1-b6cc-499d-b944-7e6732399d9b" containerID="2528605dc2ee759014314217bc33a4b9311bfb4874ac4288f66b4c65a6e048ba" exitCode=0 Jan 26 21:11:56 crc kubenswrapper[4899]: I0126 21:11:56.999000 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" event={"ID":"8596bee1-b6cc-499d-b944-7e6732399d9b","Type":"ContainerDied","Data":"2528605dc2ee759014314217bc33a4b9311bfb4874ac4288f66b4c65a6e048ba"} Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.288661 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.380219 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data\") pod \"8596bee1-b6cc-499d-b944-7e6732399d9b\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.380428 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h99g8\" (UniqueName: \"kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8\") pod \"8596bee1-b6cc-499d-b944-7e6732399d9b\" (UID: \"8596bee1-b6cc-499d-b944-7e6732399d9b\") " Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.386156 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8" (OuterVolumeSpecName: "kube-api-access-h99g8") pod "8596bee1-b6cc-499d-b944-7e6732399d9b" (UID: "8596bee1-b6cc-499d-b944-7e6732399d9b"). InnerVolumeSpecName "kube-api-access-h99g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.413381 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data" (OuterVolumeSpecName: "config-data") pod "8596bee1-b6cc-499d-b944-7e6732399d9b" (UID: "8596bee1-b6cc-499d-b944-7e6732399d9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.482046 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h99g8\" (UniqueName: \"kubernetes.io/projected/8596bee1-b6cc-499d-b944-7e6732399d9b-kube-api-access-h99g8\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:58 crc kubenswrapper[4899]: I0126 21:11:58.482081 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8596bee1-b6cc-499d-b944-7e6732399d9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.013713 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" event={"ID":"8596bee1-b6cc-499d-b944-7e6732399d9b","Type":"ContainerDied","Data":"f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2"} Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.013766 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62705d83fb1b1917d282992d552206a1ab7aeb23d3b56485c9f60d1e0eb28b2" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.013852 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-db-sync-2dc4c" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.218448 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone-bootstrap-6jzcd"] Jan 26 21:11:59 crc kubenswrapper[4899]: E0126 21:11:59.219227 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="extract-utilities" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219251 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="extract-utilities" Jan 26 21:11:59 crc kubenswrapper[4899]: E0126 21:11:59.219262 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8596bee1-b6cc-499d-b944-7e6732399d9b" containerName="keystone-db-sync" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219271 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8596bee1-b6cc-499d-b944-7e6732399d9b" containerName="keystone-db-sync" Jan 26 21:11:59 crc kubenswrapper[4899]: E0126 21:11:59.219302 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="extract-content" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219310 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="extract-content" Jan 26 21:11:59 crc kubenswrapper[4899]: E0126 21:11:59.219321 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="registry-server" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219330 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="registry-server" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219573 4899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a2c7a03-b093-4607-9583-30dff2d55ad4" containerName="registry-server" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.219591 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8596bee1-b6cc-499d-b944-7e6732399d9b" containerName="keystone-db-sync" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.220140 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.223223 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.223318 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-scripts" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.223716 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"osp-secret" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.223862 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-keystone-dockercfg-4bxqr" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.224051 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-config-data" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.224045 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-bootstrap-6jzcd"] Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.296875 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.296948 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7fj\" (UniqueName: \"kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.296979 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.297006 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.297051 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.398618 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.398679 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx7fj\" (UniqueName: \"kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.398701 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.398721 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.398743 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.403970 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.404386 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.404919 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.415623 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.419000 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx7fj\" (UniqueName: \"kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj\") pod \"keystone-bootstrap-6jzcd\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.547961 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:11:59 crc kubenswrapper[4899]: I0126 21:11:59.987150 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-bootstrap-6jzcd"] Jan 26 21:12:00 crc kubenswrapper[4899]: I0126 21:12:00.021920 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" event={"ID":"7ab177f9-aee8-4921-b60d-c085a99964f4","Type":"ContainerStarted","Data":"af357f9a96e6c3c749c7085390288cceca2332e7f195c1150974254bb7543bc6"} Jan 26 21:12:02 crc kubenswrapper[4899]: I0126 21:12:02.066231 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" event={"ID":"7ab177f9-aee8-4921-b60d-c085a99964f4","Type":"ContainerStarted","Data":"a1f6d8b9cd8e4346edb9826d736ffd19197b9c4573847353c5e8ed20e06d6443"} Jan 26 21:12:02 crc kubenswrapper[4899]: I0126 21:12:02.103347 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" podStartSLOduration=3.103318984 podStartE2EDuration="3.103318984s" podCreationTimestamp="2026-01-26 21:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:12:02.094869058 +0000 UTC m=+1011.476457125" watchObservedRunningTime="2026-01-26 21:12:02.103318984 +0000 UTC m=+1011.484907041" Jan 26 21:12:04 crc kubenswrapper[4899]: I0126 21:12:04.081514 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/ceph" event={"ID":"951664be-c618-4a13-8265-32cf5a4d7cf1","Type":"ContainerStarted","Data":"f2260c1878f0f80c6406c66bf8626f4036e6bab59943aaea1d5243720753b490"} Jan 26 21:12:04 crc kubenswrapper[4899]: I0126 21:12:04.103113 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/ceph" podStartSLOduration=2.853437301 podStartE2EDuration="49.103090915s" 
podCreationTimestamp="2026-01-26 21:11:15 +0000 UTC" firstStartedPulling="2026-01-26 21:11:16.445223136 +0000 UTC m=+965.826811173" lastFinishedPulling="2026-01-26 21:12:02.69487675 +0000 UTC m=+1012.076464787" observedRunningTime="2026-01-26 21:12:04.097597242 +0000 UTC m=+1013.479185279" watchObservedRunningTime="2026-01-26 21:12:04.103090915 +0000 UTC m=+1013.484678952" Jan 26 21:12:06 crc kubenswrapper[4899]: I0126 21:12:06.096622 4899 generic.go:334] "Generic (PLEG): container finished" podID="7ab177f9-aee8-4921-b60d-c085a99964f4" containerID="a1f6d8b9cd8e4346edb9826d736ffd19197b9c4573847353c5e8ed20e06d6443" exitCode=0 Jan 26 21:12:06 crc kubenswrapper[4899]: I0126 21:12:06.096723 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" event={"ID":"7ab177f9-aee8-4921-b60d-c085a99964f4","Type":"ContainerDied","Data":"a1f6d8b9cd8e4346edb9826d736ffd19197b9c4573847353c5e8ed20e06d6443"} Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.427758 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.617083 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data\") pod \"7ab177f9-aee8-4921-b60d-c085a99964f4\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.617258 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx7fj\" (UniqueName: \"kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj\") pod \"7ab177f9-aee8-4921-b60d-c085a99964f4\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.617289 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys\") pod \"7ab177f9-aee8-4921-b60d-c085a99964f4\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.617317 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts\") pod \"7ab177f9-aee8-4921-b60d-c085a99964f4\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.617339 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys\") pod \"7ab177f9-aee8-4921-b60d-c085a99964f4\" (UID: \"7ab177f9-aee8-4921-b60d-c085a99964f4\") " Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.622515 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts" (OuterVolumeSpecName: "scripts") pod "7ab177f9-aee8-4921-b60d-c085a99964f4" (UID: "7ab177f9-aee8-4921-b60d-c085a99964f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.623292 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj" (OuterVolumeSpecName: "kube-api-access-nx7fj") pod "7ab177f9-aee8-4921-b60d-c085a99964f4" (UID: "7ab177f9-aee8-4921-b60d-c085a99964f4"). InnerVolumeSpecName "kube-api-access-nx7fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.631524 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7ab177f9-aee8-4921-b60d-c085a99964f4" (UID: "7ab177f9-aee8-4921-b60d-c085a99964f4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.634181 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data" (OuterVolumeSpecName: "config-data") pod "7ab177f9-aee8-4921-b60d-c085a99964f4" (UID: "7ab177f9-aee8-4921-b60d-c085a99964f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.635826 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7ab177f9-aee8-4921-b60d-c085a99964f4" (UID: "7ab177f9-aee8-4921-b60d-c085a99964f4"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.718795 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx7fj\" (UniqueName: \"kubernetes.io/projected/7ab177f9-aee8-4921-b60d-c085a99964f4-kube-api-access-nx7fj\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.718838 4899 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.718851 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.718863 4899 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:07 crc kubenswrapper[4899]: I0126 21:12:07.718875 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab177f9-aee8-4921-b60d-c085a99964f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.112554 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" event={"ID":"7ab177f9-aee8-4921-b60d-c085a99964f4","Type":"ContainerDied","Data":"af357f9a96e6c3c749c7085390288cceca2332e7f195c1150974254bb7543bc6"} Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.112604 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af357f9a96e6c3c749c7085390288cceca2332e7f195c1150974254bb7543bc6" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.112664 4899 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-bootstrap-6jzcd" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.200155 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:12:08 crc kubenswrapper[4899]: E0126 21:12:08.200558 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab177f9-aee8-4921-b60d-c085a99964f4" containerName="keystone-bootstrap" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.200583 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab177f9-aee8-4921-b60d-c085a99964f4" containerName="keystone-bootstrap" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.200750 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab177f9-aee8-4921-b60d-c085a99964f4" containerName="keystone-bootstrap" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.201396 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.204067 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-scripts" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.204484 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.205060 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-keystone-dockercfg-4bxqr" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.205394 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"keystone-config-data" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.213993 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.237456 
4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.237525 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.237567 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz65\" (UniqueName: \"kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.237689 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.237732 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc 
kubenswrapper[4899]: I0126 21:12:08.242431 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.243884 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.261661 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339077 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339141 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339167 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339194 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data\") pod \"keystone-59fbff8547-2xlqq\" (UID: 
\"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339222 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz65\" (UniqueName: \"kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339442 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339506 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.339542 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckwq\" (UniqueName: \"kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.343515 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts\") pod \"keystone-59fbff8547-2xlqq\" (UID: 
\"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.343887 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.344024 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.344798 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.356111 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz65\" (UniqueName: \"kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65\") pod \"keystone-59fbff8547-2xlqq\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.440390 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " 
pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.440440 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.440510 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckwq\" (UniqueName: \"kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.441067 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.441381 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.461354 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckwq\" (UniqueName: \"kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq\") pod \"certified-operators-d6cff\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " 
pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.517688 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.580360 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:08 crc kubenswrapper[4899]: I0126 21:12:08.776204 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:12:09 crc kubenswrapper[4899]: I0126 21:12:09.117534 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:09 crc kubenswrapper[4899]: I0126 21:12:09.120334 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" event={"ID":"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0","Type":"ContainerStarted","Data":"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122"} Jan 26 21:12:09 crc kubenswrapper[4899]: I0126 21:12:09.120376 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" event={"ID":"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0","Type":"ContainerStarted","Data":"b5bd7731dfa36bf9e861ca1d74a4d004d9b18a847cee638d8746195ba7c0d1a5"} Jan 26 21:12:09 crc kubenswrapper[4899]: I0126 21:12:09.121230 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:10 crc kubenswrapper[4899]: I0126 21:12:10.129298 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerID="3e7c1ca600c698a5391e234dcd31169f6ae8f7d3b88725766290132e4ce78464" exitCode=0 Jan 26 21:12:10 crc kubenswrapper[4899]: I0126 21:12:10.129404 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerDied","Data":"3e7c1ca600c698a5391e234dcd31169f6ae8f7d3b88725766290132e4ce78464"} Jan 26 21:12:10 crc kubenswrapper[4899]: I0126 21:12:10.129853 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerStarted","Data":"13ab370078a1ba79aa25c92907673daaf7e8e53d3f0594ecdba43556ffcc7238"} Jan 26 21:12:10 crc kubenswrapper[4899]: I0126 21:12:10.152876 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" podStartSLOduration=2.152853677 podStartE2EDuration="2.152853677s" podCreationTimestamp="2026-01-26 21:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:12:09.142779541 +0000 UTC m=+1018.524367578" watchObservedRunningTime="2026-01-26 21:12:10.152853677 +0000 UTC m=+1019.534441704" Jan 26 21:12:11 crc kubenswrapper[4899]: I0126 21:12:11.141261 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerStarted","Data":"8e561176e5bb44e5ba0e7a9477590369294b90f90b1e18329b25f68ac558a131"} Jan 26 21:12:12 crc kubenswrapper[4899]: I0126 21:12:12.147877 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerID="8e561176e5bb44e5ba0e7a9477590369294b90f90b1e18329b25f68ac558a131" exitCode=0 Jan 26 21:12:12 crc kubenswrapper[4899]: I0126 21:12:12.147960 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" 
event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerDied","Data":"8e561176e5bb44e5ba0e7a9477590369294b90f90b1e18329b25f68ac558a131"} Jan 26 21:12:13 crc kubenswrapper[4899]: I0126 21:12:13.157595 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerStarted","Data":"824e384d1885db9606ddb3ce9a2adb9a9e17c474645ca4f4cc66fb901ebc5f9f"} Jan 26 21:12:13 crc kubenswrapper[4899]: I0126 21:12:13.175127 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d6cff" podStartSLOduration=2.709801148 podStartE2EDuration="5.175104699s" podCreationTimestamp="2026-01-26 21:12:08 +0000 UTC" firstStartedPulling="2026-01-26 21:12:10.131290855 +0000 UTC m=+1019.512878902" lastFinishedPulling="2026-01-26 21:12:12.596594416 +0000 UTC m=+1021.978182453" observedRunningTime="2026-01-26 21:12:13.172306301 +0000 UTC m=+1022.553894338" watchObservedRunningTime="2026-01-26 21:12:13.175104699 +0000 UTC m=+1022.556692736" Jan 26 21:12:18 crc kubenswrapper[4899]: I0126 21:12:18.581351 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:18 crc kubenswrapper[4899]: I0126 21:12:18.581820 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:18 crc kubenswrapper[4899]: I0126 21:12:18.630622 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:19 crc kubenswrapper[4899]: I0126 21:12:19.250029 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:19 crc kubenswrapper[4899]: I0126 21:12:19.298332 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:21 crc kubenswrapper[4899]: I0126 21:12:21.211888 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d6cff" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="registry-server" containerID="cri-o://824e384d1885db9606ddb3ce9a2adb9a9e17c474645ca4f4cc66fb901ebc5f9f" gracePeriod=2 Jan 26 21:12:22 crc kubenswrapper[4899]: I0126 21:12:22.226470 4899 generic.go:334] "Generic (PLEG): container finished" podID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerID="824e384d1885db9606ddb3ce9a2adb9a9e17c474645ca4f4cc66fb901ebc5f9f" exitCode=0 Jan 26 21:12:22 crc kubenswrapper[4899]: I0126 21:12:22.226517 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerDied","Data":"824e384d1885db9606ddb3ce9a2adb9a9e17c474645ca4f4cc66fb901ebc5f9f"} Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.177200 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.238248 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6cff" event={"ID":"f4380568-2b2e-44ac-9bc6-65af98ae1496","Type":"ContainerDied","Data":"13ab370078a1ba79aa25c92907673daaf7e8e53d3f0594ecdba43556ffcc7238"} Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.238337 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6cff" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.238331 4899 scope.go:117] "RemoveContainer" containerID="824e384d1885db9606ddb3ce9a2adb9a9e17c474645ca4f4cc66fb901ebc5f9f" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.261637 4899 scope.go:117] "RemoveContainer" containerID="8e561176e5bb44e5ba0e7a9477590369294b90f90b1e18329b25f68ac558a131" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.285739 4899 scope.go:117] "RemoveContainer" containerID="3e7c1ca600c698a5391e234dcd31169f6ae8f7d3b88725766290132e4ce78464" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.367309 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cckwq\" (UniqueName: \"kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq\") pod \"f4380568-2b2e-44ac-9bc6-65af98ae1496\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.367359 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities\") pod \"f4380568-2b2e-44ac-9bc6-65af98ae1496\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.367404 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content\") pod \"f4380568-2b2e-44ac-9bc6-65af98ae1496\" (UID: \"f4380568-2b2e-44ac-9bc6-65af98ae1496\") " Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.368443 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities" (OuterVolumeSpecName: "utilities") pod "f4380568-2b2e-44ac-9bc6-65af98ae1496" (UID: 
"f4380568-2b2e-44ac-9bc6-65af98ae1496"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.372615 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq" (OuterVolumeSpecName: "kube-api-access-cckwq") pod "f4380568-2b2e-44ac-9bc6-65af98ae1496" (UID: "f4380568-2b2e-44ac-9bc6-65af98ae1496"). InnerVolumeSpecName "kube-api-access-cckwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.408552 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4380568-2b2e-44ac-9bc6-65af98ae1496" (UID: "f4380568-2b2e-44ac-9bc6-65af98ae1496"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.469718 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cckwq\" (UniqueName: \"kubernetes.io/projected/f4380568-2b2e-44ac-9bc6-65af98ae1496-kube-api-access-cckwq\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.469772 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.469798 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4380568-2b2e-44ac-9bc6-65af98ae1496-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.579129 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:23 crc kubenswrapper[4899]: I0126 21:12:23.583579 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d6cff"] Jan 26 21:12:24 crc kubenswrapper[4899]: I0126 21:12:24.938630 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" path="/var/lib/kubelet/pods/f4380568-2b2e-44ac-9bc6-65af98ae1496/volumes" Jan 26 21:12:40 crc kubenswrapper[4899]: I0126 21:12:40.210267 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:12:45 crc kubenswrapper[4899]: E0126 21:12:45.849908 4899 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:37592->38.102.83.22:45343: write tcp 38.102.83.22:37592->38.102.83.22:45343: write: broken pipe Jan 26 21:12:57 crc kubenswrapper[4899]: E0126 21:12:57.519888 4899 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:55058->38.102.83.22:45343: write tcp 38.102.83.22:55058->38.102.83.22:45343: write: broken pipe Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.235367 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:01 crc kubenswrapper[4899]: E0126 21:13:01.235895 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="extract-utilities" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.235909 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="extract-utilities" Jan 26 21:13:01 crc kubenswrapper[4899]: E0126 21:13:01.235921 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="extract-content" Jan 26 21:13:01 crc kubenswrapper[4899]: 
I0126 21:13:01.235946 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="extract-content" Jan 26 21:13:01 crc kubenswrapper[4899]: E0126 21:13:01.235958 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="registry-server" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.235966 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="registry-server" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.236096 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4380568-2b2e-44ac-9bc6-65af98ae1496" containerName="registry-server" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.236595 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.238748 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-index-dockercfg-c5dct" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.250117 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.399941 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhcj\" (UniqueName: \"kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj\") pod \"manila-operator-index-5pdrf\" (UID: \"b8304b44-4c79-4df6-bb7a-16002e45f486\") " pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.501418 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhcj\" (UniqueName: 
\"kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj\") pod \"manila-operator-index-5pdrf\" (UID: \"b8304b44-4c79-4df6-bb7a-16002e45f486\") " pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.519547 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhcj\" (UniqueName: \"kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj\") pod \"manila-operator-index-5pdrf\" (UID: \"b8304b44-4c79-4df6-bb7a-16002e45f486\") " pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:01 crc kubenswrapper[4899]: I0126 21:13:01.614909 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:02 crc kubenswrapper[4899]: I0126 21:13:02.192521 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:02 crc kubenswrapper[4899]: I0126 21:13:02.533270 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-5pdrf" event={"ID":"b8304b44-4c79-4df6-bb7a-16002e45f486","Type":"ContainerStarted","Data":"c4111336a5f4394ed9d3718e8eb6bfdf776768254cc640a8677eb7755b64ae2c"} Jan 26 21:13:04 crc kubenswrapper[4899]: I0126 21:13:04.552289 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-5pdrf" event={"ID":"b8304b44-4c79-4df6-bb7a-16002e45f486","Type":"ContainerStarted","Data":"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8"} Jan 26 21:13:04 crc kubenswrapper[4899]: I0126 21:13:04.572986 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-index-5pdrf" podStartSLOduration=1.94496058 podStartE2EDuration="3.572966568s" podCreationTimestamp="2026-01-26 21:13:01 +0000 UTC" firstStartedPulling="2026-01-26 
21:13:02.195460946 +0000 UTC m=+1071.577048983" lastFinishedPulling="2026-01-26 21:13:03.823466934 +0000 UTC m=+1073.205054971" observedRunningTime="2026-01-26 21:13:04.570476679 +0000 UTC m=+1073.952064726" watchObservedRunningTime="2026-01-26 21:13:04.572966568 +0000 UTC m=+1073.954554605" Jan 26 21:13:05 crc kubenswrapper[4899]: I0126 21:13:05.436627 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.036377 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"] Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.037462 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.043479 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"] Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.188660 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj4g\" (UniqueName: \"kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g\") pod \"manila-operator-index-c9zzr\" (UID: \"39906e7d-94ba-4997-8e46-27d2f18888c9\") " pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.291366 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj4g\" (UniqueName: \"kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g\") pod \"manila-operator-index-c9zzr\" (UID: \"39906e7d-94ba-4997-8e46-27d2f18888c9\") " pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.310032 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vcj4g\" (UniqueName: \"kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g\") pod \"manila-operator-index-c9zzr\" (UID: \"39906e7d-94ba-4997-8e46-27d2f18888c9\") " pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.353735 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.569489 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/manila-operator-index-5pdrf" podUID="b8304b44-4c79-4df6-bb7a-16002e45f486" containerName="registry-server" containerID="cri-o://f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8" gracePeriod=2 Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.799428 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"] Jan 26 21:13:06 crc kubenswrapper[4899]: I0126 21:13:06.975376 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.103575 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhcj\" (UniqueName: \"kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj\") pod \"b8304b44-4c79-4df6-bb7a-16002e45f486\" (UID: \"b8304b44-4c79-4df6-bb7a-16002e45f486\") " Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.108696 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj" (OuterVolumeSpecName: "kube-api-access-qrhcj") pod "b8304b44-4c79-4df6-bb7a-16002e45f486" (UID: "b8304b44-4c79-4df6-bb7a-16002e45f486"). InnerVolumeSpecName "kube-api-access-qrhcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.207206 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhcj\" (UniqueName: \"kubernetes.io/projected/b8304b44-4c79-4df6-bb7a-16002e45f486-kube-api-access-qrhcj\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.577648 4899 generic.go:334] "Generic (PLEG): container finished" podID="b8304b44-4c79-4df6-bb7a-16002e45f486" containerID="f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8" exitCode=0 Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.577717 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-5pdrf" event={"ID":"b8304b44-4c79-4df6-bb7a-16002e45f486","Type":"ContainerDied","Data":"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8"} Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.577744 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-index-5pdrf" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.577760 4899 scope.go:117] "RemoveContainer" containerID="f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.577746 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-5pdrf" event={"ID":"b8304b44-4c79-4df6-bb7a-16002e45f486","Type":"ContainerDied","Data":"c4111336a5f4394ed9d3718e8eb6bfdf776768254cc640a8677eb7755b64ae2c"} Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.580995 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-c9zzr" event={"ID":"39906e7d-94ba-4997-8e46-27d2f18888c9","Type":"ContainerStarted","Data":"8d253ac8ec077dd0ff5ae0a4f62b9d9eb1d6356b0db70b336fb80a5bb9036b72"} Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.581034 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-c9zzr" event={"ID":"39906e7d-94ba-4997-8e46-27d2f18888c9","Type":"ContainerStarted","Data":"cec5004d3b5ee58ca1b3b2885d24bf46474b03efb7161899abd71a6432a7b546"} Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.604447 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-index-c9zzr" podStartSLOduration=1.554162235 podStartE2EDuration="1.604424398s" podCreationTimestamp="2026-01-26 21:13:06 +0000 UTC" firstStartedPulling="2026-01-26 21:13:06.813453856 +0000 UTC m=+1076.195041893" lastFinishedPulling="2026-01-26 21:13:06.863716019 +0000 UTC m=+1076.245304056" observedRunningTime="2026-01-26 21:13:07.601051564 +0000 UTC m=+1076.982639621" watchObservedRunningTime="2026-01-26 21:13:07.604424398 +0000 UTC m=+1076.986012455" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.623575 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.624865 4899 scope.go:117] "RemoveContainer" containerID="f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8" Jan 26 21:13:07 crc kubenswrapper[4899]: E0126 21:13:07.626002 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8\": container with ID starting with f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8 not found: ID does not exist" containerID="f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.626045 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8"} err="failed to get container status \"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8\": rpc error: code = NotFound desc = could not find container \"f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8\": container with ID starting with f5554a84060b21fd54d0664f30780700ac7c2ac2e5a75b656de33327127b74b8 not found: ID does not exist" Jan 26 21:13:07 crc kubenswrapper[4899]: I0126 21:13:07.628408 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/manila-operator-index-5pdrf"] Jan 26 21:13:08 crc kubenswrapper[4899]: I0126 21:13:08.938375 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8304b44-4c79-4df6-bb7a-16002e45f486" path="/var/lib/kubelet/pods/b8304b44-4c79-4df6-bb7a-16002e45f486/volumes" Jan 26 21:13:16 crc kubenswrapper[4899]: I0126 21:13:16.354865 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:16 crc kubenswrapper[4899]: I0126 21:13:16.356084 4899 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:16 crc kubenswrapper[4899]: I0126 21:13:16.379863 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:16 crc kubenswrapper[4899]: I0126 21:13:16.689498 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-index-c9zzr" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.096249 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6"] Jan 26 21:13:19 crc kubenswrapper[4899]: E0126 21:13:19.097204 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8304b44-4c79-4df6-bb7a-16002e45f486" containerName="registry-server" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.097302 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8304b44-4c79-4df6-bb7a-16002e45f486" containerName="registry-server" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.097562 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8304b44-4c79-4df6-bb7a-16002e45f486" containerName="registry-server" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.098762 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.105063 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6"] Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.123865 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-44wdn" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.254201 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkrd\" (UniqueName: \"kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.254720 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.254879 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 
21:13:19.355606 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.355680 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.355737 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nkrd\" (UniqueName: \"kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.356497 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.356578 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.386512 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nkrd\" (UniqueName: \"kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd\") pod \"9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.445101 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:19 crc kubenswrapper[4899]: I0126 21:13:19.869102 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6"] Jan 26 21:13:20 crc kubenswrapper[4899]: I0126 21:13:20.672827 4899 generic.go:334] "Generic (PLEG): container finished" podID="431633bb-098b-4392-908c-d844fc2a9557" containerID="8aea9cd4643d28009c3bb744ec5f32fb433ed098cd10d859e4e3026e71e96ac9" exitCode=0 Jan 26 21:13:20 crc kubenswrapper[4899]: I0126 21:13:20.672982 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" event={"ID":"431633bb-098b-4392-908c-d844fc2a9557","Type":"ContainerDied","Data":"8aea9cd4643d28009c3bb744ec5f32fb433ed098cd10d859e4e3026e71e96ac9"} Jan 26 21:13:20 crc kubenswrapper[4899]: I0126 21:13:20.673227 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" event={"ID":"431633bb-098b-4392-908c-d844fc2a9557","Type":"ContainerStarted","Data":"7e92c9b7f194010b70eab708a5a4f73b27d7efdcede9fbb7598f4d35645ca714"} Jan 26 21:13:21 crc kubenswrapper[4899]: I0126 21:13:21.685664 4899 generic.go:334] "Generic (PLEG): container finished" podID="431633bb-098b-4392-908c-d844fc2a9557" containerID="3f6a1501309f4e03724c58de6aec82442a41524ccf2beb9610e16ef341bb4858" exitCode=0 Jan 26 21:13:21 crc kubenswrapper[4899]: I0126 21:13:21.685716 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" event={"ID":"431633bb-098b-4392-908c-d844fc2a9557","Type":"ContainerDied","Data":"3f6a1501309f4e03724c58de6aec82442a41524ccf2beb9610e16ef341bb4858"} Jan 26 21:13:22 crc kubenswrapper[4899]: I0126 21:13:22.699730 4899 generic.go:334] "Generic (PLEG): container finished" podID="431633bb-098b-4392-908c-d844fc2a9557" containerID="88eac7395b04ea1aa8b113b3fe8dfa17b3b137a066beb16947291821671750db" exitCode=0 Jan 26 21:13:22 crc kubenswrapper[4899]: I0126 21:13:22.699872 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" event={"ID":"431633bb-098b-4392-908c-d844fc2a9557","Type":"ContainerDied","Data":"88eac7395b04ea1aa8b113b3fe8dfa17b3b137a066beb16947291821671750db"} Jan 26 21:13:23 crc kubenswrapper[4899]: I0126 21:13:23.963450 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.122832 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nkrd\" (UniqueName: \"kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd\") pod \"431633bb-098b-4392-908c-d844fc2a9557\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.123210 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle\") pod \"431633bb-098b-4392-908c-d844fc2a9557\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.123248 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util\") pod \"431633bb-098b-4392-908c-d844fc2a9557\" (UID: \"431633bb-098b-4392-908c-d844fc2a9557\") " Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.124337 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle" (OuterVolumeSpecName: "bundle") pod "431633bb-098b-4392-908c-d844fc2a9557" (UID: "431633bb-098b-4392-908c-d844fc2a9557"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.131117 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd" (OuterVolumeSpecName: "kube-api-access-8nkrd") pod "431633bb-098b-4392-908c-d844fc2a9557" (UID: "431633bb-098b-4392-908c-d844fc2a9557"). InnerVolumeSpecName "kube-api-access-8nkrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.136380 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util" (OuterVolumeSpecName: "util") pod "431633bb-098b-4392-908c-d844fc2a9557" (UID: "431633bb-098b-4392-908c-d844fc2a9557"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.225010 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nkrd\" (UniqueName: \"kubernetes.io/projected/431633bb-098b-4392-908c-d844fc2a9557-kube-api-access-8nkrd\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.225054 4899 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.225066 4899 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/431633bb-098b-4392-908c-d844fc2a9557-util\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.715670 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" event={"ID":"431633bb-098b-4392-908c-d844fc2a9557","Type":"ContainerDied","Data":"7e92c9b7f194010b70eab708a5a4f73b27d7efdcede9fbb7598f4d35645ca714"} Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.715718 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e92c9b7f194010b70eab708a5a4f73b27d7efdcede9fbb7598f4d35645ca714" Jan 26 21:13:24 crc kubenswrapper[4899]: I0126 21:13:24.715721 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.408549 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"] Jan 26 21:13:32 crc kubenswrapper[4899]: E0126 21:13:32.409488 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="pull" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.409508 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="pull" Jan 26 21:13:32 crc kubenswrapper[4899]: E0126 21:13:32.409531 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="extract" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.409539 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="extract" Jan 26 21:13:32 crc kubenswrapper[4899]: E0126 21:13:32.409564 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="util" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.409573 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="util" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.409716 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="431633bb-098b-4392-908c-d844fc2a9557" containerName="extract" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.410338 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.412319 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-service-cert" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.412512 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-c84mk" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.424818 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"] Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.551951 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggdd\" (UniqueName: \"kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.552004 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.552133 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: 
\"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.653590 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.653665 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gggdd\" (UniqueName: \"kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.653689 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.668556 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.668567 4899 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.671366 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gggdd\" (UniqueName: \"kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd\") pod \"manila-operator-controller-manager-66974747b8-6bs75\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") " pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:32 crc kubenswrapper[4899]: I0126 21:13:32.738502 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:33 crc kubenswrapper[4899]: I0126 21:13:33.207284 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"] Jan 26 21:13:33 crc kubenswrapper[4899]: I0126 21:13:33.790818 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" event={"ID":"7b93a53e-a97b-4250-9524-332e5b65e329","Type":"ContainerStarted","Data":"686c1b300b2d47323fce6cbdc389d97fe4299adeaad01a30002f407fd715f0c7"} Jan 26 21:13:35 crc kubenswrapper[4899]: I0126 21:13:35.805494 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" event={"ID":"7b93a53e-a97b-4250-9524-332e5b65e329","Type":"ContainerStarted","Data":"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"} Jan 26 21:13:35 crc kubenswrapper[4899]: I0126 21:13:35.806035 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:35 crc kubenswrapper[4899]: I0126 21:13:35.828871 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" podStartSLOduration=1.639362878 podStartE2EDuration="3.828853901s" podCreationTimestamp="2026-01-26 21:13:32 +0000 UTC" firstStartedPulling="2026-01-26 21:13:33.214185051 +0000 UTC m=+1102.595773088" lastFinishedPulling="2026-01-26 21:13:35.403676074 +0000 UTC m=+1104.785264111" observedRunningTime="2026-01-26 21:13:35.822999274 +0000 UTC m=+1105.204587311" watchObservedRunningTime="2026-01-26 21:13:35.828853901 +0000 UTC m=+1105.210441928" Jan 26 21:13:42 crc kubenswrapper[4899]: I0126 21:13:42.743754 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.025267 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-caea-account-create-update-smcmf"] Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.026834 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.028356 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-db-secret" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.030226 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-create-fs9xx"] Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.032148 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5v7\" (UniqueName: \"kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.032217 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.034420 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.042704 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-caea-account-create-update-smcmf"] Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.049997 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-create-fs9xx"] Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.134078 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5v7\" (UniqueName: \"kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.134745 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts\") pod \"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.134953 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.135126 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zc2n\" (UniqueName: \"kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n\") pod 
\"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.135761 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.160312 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5v7\" (UniqueName: \"kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7\") pod \"manila-caea-account-create-update-smcmf\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.237981 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts\") pod \"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.238260 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zc2n\" (UniqueName: \"kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n\") pod \"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.238726 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts\") pod \"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.257412 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zc2n\" (UniqueName: \"kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n\") pod \"manila-db-create-fs9xx\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.348762 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.358670 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.843510 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-caea-account-create-update-smcmf"] Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.896510 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" event={"ID":"52c6dc85-792f-4c5f-9082-34a70a742114","Type":"ContainerStarted","Data":"fbcf68ffcd171283cacf33638e90c2fcc3d6e66da3afd1c5f05e003d88091cd9"} Jan 26 21:13:46 crc kubenswrapper[4899]: I0126 21:13:46.956132 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-create-fs9xx"] Jan 26 21:13:46 crc kubenswrapper[4899]: W0126 21:13:46.966550 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907ae7f3_9325_49ec_a87a_ff3a39bec840.slice/crio-e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90 
WatchSource:0}: Error finding container e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90: Status 404 returned error can't find the container with id e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90 Jan 26 21:13:47 crc kubenswrapper[4899]: I0126 21:13:47.903996 4899 generic.go:334] "Generic (PLEG): container finished" podID="907ae7f3-9325-49ec-a87a-ff3a39bec840" containerID="8ecf27c40c20b8ae4e41e82b29bc3326f03e7eabdc3a698331d8e84bcb44660a" exitCode=0 Jan 26 21:13:47 crc kubenswrapper[4899]: I0126 21:13:47.904117 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-fs9xx" event={"ID":"907ae7f3-9325-49ec-a87a-ff3a39bec840","Type":"ContainerDied","Data":"8ecf27c40c20b8ae4e41e82b29bc3326f03e7eabdc3a698331d8e84bcb44660a"} Jan 26 21:13:47 crc kubenswrapper[4899]: I0126 21:13:47.904312 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-fs9xx" event={"ID":"907ae7f3-9325-49ec-a87a-ff3a39bec840","Type":"ContainerStarted","Data":"e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90"} Jan 26 21:13:47 crc kubenswrapper[4899]: I0126 21:13:47.906426 4899 generic.go:334] "Generic (PLEG): container finished" podID="52c6dc85-792f-4c5f-9082-34a70a742114" containerID="a32cc094b587ebae9b7509d546f95399c82ee7ad73fdc5c08f331277ada92de0" exitCode=0 Jan 26 21:13:47 crc kubenswrapper[4899]: I0126 21:13:47.906455 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" event={"ID":"52c6dc85-792f-4c5f-9082-34a70a742114","Type":"ContainerDied","Data":"a32cc094b587ebae9b7509d546f95399c82ee7ad73fdc5c08f331277ada92de0"} Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.249500 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.322062 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.376952 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts\") pod \"52c6dc85-792f-4c5f-9082-34a70a742114\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.377081 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t5v7\" (UniqueName: \"kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7\") pod \"52c6dc85-792f-4c5f-9082-34a70a742114\" (UID: \"52c6dc85-792f-4c5f-9082-34a70a742114\") " Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.377943 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52c6dc85-792f-4c5f-9082-34a70a742114" (UID: "52c6dc85-792f-4c5f-9082-34a70a742114"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.382898 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7" (OuterVolumeSpecName: "kube-api-access-9t5v7") pod "52c6dc85-792f-4c5f-9082-34a70a742114" (UID: "52c6dc85-792f-4c5f-9082-34a70a742114"). InnerVolumeSpecName "kube-api-access-9t5v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.478203 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zc2n\" (UniqueName: \"kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n\") pod \"907ae7f3-9325-49ec-a87a-ff3a39bec840\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.478591 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts\") pod \"907ae7f3-9325-49ec-a87a-ff3a39bec840\" (UID: \"907ae7f3-9325-49ec-a87a-ff3a39bec840\") " Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.478985 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t5v7\" (UniqueName: \"kubernetes.io/projected/52c6dc85-792f-4c5f-9082-34a70a742114-kube-api-access-9t5v7\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.479065 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52c6dc85-792f-4c5f-9082-34a70a742114-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.479139 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "907ae7f3-9325-49ec-a87a-ff3a39bec840" (UID: "907ae7f3-9325-49ec-a87a-ff3a39bec840"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.484455 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n" (OuterVolumeSpecName: "kube-api-access-5zc2n") pod "907ae7f3-9325-49ec-a87a-ff3a39bec840" (UID: "907ae7f3-9325-49ec-a87a-ff3a39bec840"). InnerVolumeSpecName "kube-api-access-5zc2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.580637 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zc2n\" (UniqueName: \"kubernetes.io/projected/907ae7f3-9325-49ec-a87a-ff3a39bec840-kube-api-access-5zc2n\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.580675 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ae7f3-9325-49ec-a87a-ff3a39bec840-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.923186 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-fs9xx" event={"ID":"907ae7f3-9325-49ec-a87a-ff3a39bec840","Type":"ContainerDied","Data":"e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90"} Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.923228 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b8990ec5ed50edb9d853965965ab3bc4795e3029d2b0a867da0646e7986a90" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.923271 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-create-fs9xx" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.924792 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" event={"ID":"52c6dc85-792f-4c5f-9082-34a70a742114","Type":"ContainerDied","Data":"fbcf68ffcd171283cacf33638e90c2fcc3d6e66da3afd1c5f05e003d88091cd9"} Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.924913 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcf68ffcd171283cacf33638e90c2fcc3d6e66da3afd1c5f05e003d88091cd9" Jan 26 21:13:49 crc kubenswrapper[4899]: I0126 21:13:49.924824 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-caea-account-create-update-smcmf" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.356368 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-sync-jtd8b"] Jan 26 21:13:51 crc kubenswrapper[4899]: E0126 21:13:51.357063 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c6dc85-792f-4c5f-9082-34a70a742114" containerName="mariadb-account-create-update" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.357082 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c6dc85-792f-4c5f-9082-34a70a742114" containerName="mariadb-account-create-update" Jan 26 21:13:51 crc kubenswrapper[4899]: E0126 21:13:51.357100 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907ae7f3-9325-49ec-a87a-ff3a39bec840" containerName="mariadb-database-create" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.357108 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="907ae7f3-9325-49ec-a87a-ff3a39bec840" containerName="mariadb-database-create" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.357259 4899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="52c6dc85-792f-4c5f-9082-34a70a742114" containerName="mariadb-account-create-update" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.357278 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="907ae7f3-9325-49ec-a87a-ff3a39bec840" containerName="mariadb-database-create" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.357805 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.360509 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-nckfv" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.360959 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.368697 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-jtd8b"] Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.507266 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkht\" (UniqueName: \"kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.507401 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.507503 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" 
(UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.608473 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkht\" (UniqueName: \"kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.608574 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.608623 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.617662 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.618564 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data\") pod 
\"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.626618 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjkht\" (UniqueName: \"kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht\") pod \"manila-db-sync-jtd8b\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.721538 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:13:51 crc kubenswrapper[4899]: I0126 21:13:51.944547 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-jtd8b"] Jan 26 21:13:52 crc kubenswrapper[4899]: I0126 21:13:52.948973 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-jtd8b" event={"ID":"5e0a325c-b753-4730-aba3-4c0b59e79b43","Type":"ContainerStarted","Data":"90b999bd77b23db8b162be85c4242b3edfc687f24bcd3598f28f4581f0de4067"} Jan 26 21:14:00 crc kubenswrapper[4899]: I0126 21:14:00.110024 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:14:00 crc kubenswrapper[4899]: I0126 21:14:00.110537 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:14:06 crc kubenswrapper[4899]: I0126 21:14:06.122815 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-jtd8b" event={"ID":"5e0a325c-b753-4730-aba3-4c0b59e79b43","Type":"ContainerStarted","Data":"4a0238be6bd14d1b7d37e317ac6550ae222d03d3441ce68f6a5ac116ea49192f"} Jan 26 21:14:06 crc kubenswrapper[4899]: I0126 21:14:06.143562 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-db-sync-jtd8b" podStartSLOduration=1.8725889919999998 podStartE2EDuration="15.143539831s" podCreationTimestamp="2026-01-26 21:13:51 +0000 UTC" firstStartedPulling="2026-01-26 21:13:51.952370969 +0000 UTC m=+1121.333959006" lastFinishedPulling="2026-01-26 21:14:05.223321808 +0000 UTC m=+1134.604909845" observedRunningTime="2026-01-26 21:14:06.143092948 +0000 UTC m=+1135.524681175" watchObservedRunningTime="2026-01-26 21:14:06.143539831 +0000 UTC m=+1135.525127868" Jan 26 21:14:30 crc kubenswrapper[4899]: I0126 21:14:30.109599 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:14:30 crc kubenswrapper[4899]: I0126 21:14:30.110177 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:14:57 crc kubenswrapper[4899]: I0126 21:14:57.510271 4899 generic.go:334] "Generic (PLEG): container finished" podID="5e0a325c-b753-4730-aba3-4c0b59e79b43" containerID="4a0238be6bd14d1b7d37e317ac6550ae222d03d3441ce68f6a5ac116ea49192f" exitCode=0 Jan 26 21:14:57 crc kubenswrapper[4899]: I0126 21:14:57.510349 4899 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="manila-kuttl-tests/manila-db-sync-jtd8b" event={"ID":"5e0a325c-b753-4730-aba3-4c0b59e79b43","Type":"ContainerDied","Data":"4a0238be6bd14d1b7d37e317ac6550ae222d03d3441ce68f6a5ac116ea49192f"} Jan 26 21:14:58 crc kubenswrapper[4899]: I0126 21:14:58.836619 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.022596 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjkht\" (UniqueName: \"kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht\") pod \"5e0a325c-b753-4730-aba3-4c0b59e79b43\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.022745 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data\") pod \"5e0a325c-b753-4730-aba3-4c0b59e79b43\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.022819 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data\") pod \"5e0a325c-b753-4730-aba3-4c0b59e79b43\" (UID: \"5e0a325c-b753-4730-aba3-4c0b59e79b43\") " Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.031115 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "5e0a325c-b753-4730-aba3-4c0b59e79b43" (UID: "5e0a325c-b753-4730-aba3-4c0b59e79b43"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.031466 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht" (OuterVolumeSpecName: "kube-api-access-fjkht") pod "5e0a325c-b753-4730-aba3-4c0b59e79b43" (UID: "5e0a325c-b753-4730-aba3-4c0b59e79b43"). InnerVolumeSpecName "kube-api-access-fjkht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.037973 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data" (OuterVolumeSpecName: "config-data") pod "5e0a325c-b753-4730-aba3-4c0b59e79b43" (UID: "5e0a325c-b753-4730-aba3-4c0b59e79b43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.139565 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjkht\" (UniqueName: \"kubernetes.io/projected/5e0a325c-b753-4730-aba3-4c0b59e79b43-kube-api-access-fjkht\") on node \"crc\" DevicePath \"\"" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.139624 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.139651 4899 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/5e0a325c-b753-4730-aba3-4c0b59e79b43-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.532295 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-jtd8b" 
event={"ID":"5e0a325c-b753-4730-aba3-4c0b59e79b43","Type":"ContainerDied","Data":"90b999bd77b23db8b162be85c4242b3edfc687f24bcd3598f28f4581f0de4067"} Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.532354 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b999bd77b23db8b162be85c4242b3edfc687f24bcd3598f28f4581f0de4067" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.532509 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-jtd8b" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.955803 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:14:59 crc kubenswrapper[4899]: E0126 21:14:59.956166 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a325c-b753-4730-aba3-4c0b59e79b43" containerName="manila-db-sync" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.956184 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a325c-b753-4730-aba3-4c0b59e79b43" containerName="manila-db-sync" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.956814 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0a325c-b753-4730-aba3-4c0b59e79b43" containerName="manila-db-sync" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.957720 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.960362 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scheduler-config-data" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.960602 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scripts" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.960753 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.963721 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-nckfv" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.977900 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.979145 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.982845 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-share-share0-config-data" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.983558 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"ceph-conf-files" Jan 26 21:14:59 crc kubenswrapper[4899]: I0126 21:14:59.987800 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:14:59.997485 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.109756 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.109823 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.109879 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.110639 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521"} 
pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.110717 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521" gracePeriod=600 Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.129338 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.131045 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.133516 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-api-config-data" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.139125 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.140408 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.145512 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.145535 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.147459 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154669 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154763 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154795 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx2wc\" (UniqueName: \"kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154855 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154890 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.154975 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155032 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155094 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155132 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155179 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbpg\" (UniqueName: \"kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155217 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.155247 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.157632 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.257325 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.257805 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.257833 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx2wc\" (UniqueName: \"kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.257477 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.257883 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258103 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc 
kubenswrapper[4899]: I0126 21:15:00.258186 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258226 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258260 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258295 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258339 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258378 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258444 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxdk\" (UniqueName: \"kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258478 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258521 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258547 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 
21:15:00.258570 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258592 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258653 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258686 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bbpg\" (UniqueName: \"kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258716 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258741 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258766 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.258794 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j9sl\" (UniqueName: \"kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.265361 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.265782 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.268235 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data\") pod \"manila-share-share0-0\" (UID: 
\"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.268585 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.269844 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.271358 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.277197 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.279957 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx2wc\" (UniqueName: \"kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc\") pod \"manila-share-share0-0\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: 
I0126 21:15:00.284146 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bbpg\" (UniqueName: \"kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg\") pod \"manila-scheduler-0\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.298418 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360341 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360548 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360654 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360742 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" 
Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360863 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxdk\" (UniqueName: \"kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.360982 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.361111 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.361234 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j9sl\" (UniqueName: \"kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.361335 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc 
kubenswrapper[4899]: I0126 21:15:00.361417 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.361854 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.362737 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.365246 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.365307 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.366142 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.378477 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.380587 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j9sl\" (UniqueName: \"kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl\") pod \"manila-api-0\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.387350 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxdk\" (UniqueName: \"kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk\") pod \"collect-profiles-29491035-4zxkp\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.448462 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.457824 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.550665 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521" exitCode=0 Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.550728 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521"} Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.550768 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8"} Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.550791 4899 scope.go:117] "RemoveContainer" containerID="aa9a721fc5929ae1bb2ab8e526b3f1d389e06cec08eea583da85a23029b223fe" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.574538 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.578189 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.908593 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:15:00 crc kubenswrapper[4899]: W0126 21:15:00.911574 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4144d880_4cba_40ec_afd1_e83576312122.slice/crio-990f574467ae7a09c8c512e455981e3e785ededb5421f27eea317963c5d0b2fa WatchSource:0}: Error finding container 990f574467ae7a09c8c512e455981e3e785ededb5421f27eea317963c5d0b2fa: Status 404 returned error can't find the container with id 990f574467ae7a09c8c512e455981e3e785ededb5421f27eea317963c5d0b2fa Jan 26 21:15:00 crc kubenswrapper[4899]: I0126 21:15:00.953962 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp"] Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.309900 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.561713 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerStarted","Data":"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2"} Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.562139 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerStarted","Data":"990f574467ae7a09c8c512e455981e3e785ededb5421f27eea317963c5d0b2fa"} Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.564771 4899 generic.go:334] "Generic 
(PLEG): container finished" podID="cc435016-c4b2-4dfe-841b-d192bb50da46" containerID="3d97f45dda8f59d135ca163257732adbc21bb2d3ba758505ffad3bd0e23ec239" exitCode=0 Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.564868 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" event={"ID":"cc435016-c4b2-4dfe-841b-d192bb50da46","Type":"ContainerDied","Data":"3d97f45dda8f59d135ca163257732adbc21bb2d3ba758505ffad3bd0e23ec239"} Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.564912 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" event={"ID":"cc435016-c4b2-4dfe-841b-d192bb50da46","Type":"ContainerStarted","Data":"832f5dc61df8d894539b667417c7ae9e2e0f5b867874fd9bd100745c7b2547ed"} Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.572700 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerStarted","Data":"982dfcadc89032738565a279f978e5d749156fc1a7392976f2fccd8adf351074"} Jan 26 21:15:01 crc kubenswrapper[4899]: I0126 21:15:01.574156 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerStarted","Data":"602e3fde12f603fe0ae5bfde3b5423a0fec258e2527ff2f12a51ebb4c3ce7d78"} Jan 26 21:15:02 crc kubenswrapper[4899]: I0126 21:15:02.641115 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerStarted","Data":"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51"} Jan 26 21:15:02 crc kubenswrapper[4899]: I0126 21:15:02.644708 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" 
event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerStarted","Data":"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b"} Jan 26 21:15:02 crc kubenswrapper[4899]: I0126 21:15:02.644789 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.343916 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.360953 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-api-0" podStartSLOduration=3.360914501 podStartE2EDuration="3.360914501s" podCreationTimestamp="2026-01-26 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:15:02.681154859 +0000 UTC m=+1192.062742916" watchObservedRunningTime="2026-01-26 21:15:03.360914501 +0000 UTC m=+1192.742502538" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.407415 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume\") pod \"cc435016-c4b2-4dfe-841b-d192bb50da46\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.407534 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume\") pod \"cc435016-c4b2-4dfe-841b-d192bb50da46\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.407578 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swxdk\" (UniqueName: 
\"kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk\") pod \"cc435016-c4b2-4dfe-841b-d192bb50da46\" (UID: \"cc435016-c4b2-4dfe-841b-d192bb50da46\") " Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.412604 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc435016-c4b2-4dfe-841b-d192bb50da46" (UID: "cc435016-c4b2-4dfe-841b-d192bb50da46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.468195 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk" (OuterVolumeSpecName: "kube-api-access-swxdk") pod "cc435016-c4b2-4dfe-841b-d192bb50da46" (UID: "cc435016-c4b2-4dfe-841b-d192bb50da46"). InnerVolumeSpecName "kube-api-access-swxdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.509705 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swxdk\" (UniqueName: \"kubernetes.io/projected/cc435016-c4b2-4dfe-841b-d192bb50da46-kube-api-access-swxdk\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.509738 4899 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc435016-c4b2-4dfe-841b-d192bb50da46-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.542114 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc435016-c4b2-4dfe-841b-d192bb50da46" (UID: "cc435016-c4b2-4dfe-841b-d192bb50da46"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.611000 4899 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc435016-c4b2-4dfe-841b-d192bb50da46-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.652715 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" event={"ID":"cc435016-c4b2-4dfe-841b-d192bb50da46","Type":"ContainerDied","Data":"832f5dc61df8d894539b667417c7ae9e2e0f5b867874fd9bd100745c7b2547ed"} Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.652749 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491035-4zxkp" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.652762 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832f5dc61df8d894539b667417c7ae9e2e0f5b867874fd9bd100745c7b2547ed" Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.656042 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerStarted","Data":"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf"} Jan 26 21:15:03 crc kubenswrapper[4899]: I0126 21:15:03.684304 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-scheduler-0" podStartSLOduration=3.950408124 podStartE2EDuration="4.684266724s" podCreationTimestamp="2026-01-26 21:14:59 +0000 UTC" firstStartedPulling="2026-01-26 21:15:01.334084881 +0000 UTC m=+1190.715672918" lastFinishedPulling="2026-01-26 21:15:02.067943461 +0000 UTC m=+1191.449531518" observedRunningTime="2026-01-26 21:15:03.677804849 +0000 UTC m=+1193.059392886" 
watchObservedRunningTime="2026-01-26 21:15:03.684266724 +0000 UTC m=+1193.065854761" Jan 26 21:15:10 crc kubenswrapper[4899]: I0126 21:15:10.575412 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:10 crc kubenswrapper[4899]: I0126 21:15:10.739588 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerStarted","Data":"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da"} Jan 26 21:15:10 crc kubenswrapper[4899]: I0126 21:15:10.739642 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerStarted","Data":"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21"} Jan 26 21:15:10 crc kubenswrapper[4899]: I0126 21:15:10.780996 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-share-share0-0" podStartSLOduration=2.760075052 podStartE2EDuration="11.780977382s" podCreationTimestamp="2026-01-26 21:14:59 +0000 UTC" firstStartedPulling="2026-01-26 21:15:00.589091123 +0000 UTC m=+1189.970679160" lastFinishedPulling="2026-01-26 21:15:09.609993453 +0000 UTC m=+1198.991581490" observedRunningTime="2026-01-26 21:15:10.770357978 +0000 UTC m=+1200.151946015" watchObservedRunningTime="2026-01-26 21:15:10.780977382 +0000 UTC m=+1200.162565419" Jan 26 21:15:20 crc kubenswrapper[4899]: I0126 21:15:20.299032 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:23 crc kubenswrapper[4899]: I0126 21:15:23.728817 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:15:24 crc kubenswrapper[4899]: I0126 21:15:24.179382 4899 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:15:24 crc kubenswrapper[4899]: I0126 21:15:24.258399 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.549862 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:26 crc kubenswrapper[4899]: E0126 21:15:26.550483 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc435016-c4b2-4dfe-841b-d192bb50da46" containerName="collect-profiles" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.550501 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc435016-c4b2-4dfe-841b-d192bb50da46" containerName="collect-profiles" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.550674 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc435016-c4b2-4dfe-841b-d192bb50da46" containerName="collect-profiles" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.551672 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.555947 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.557102 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.573077 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.579310 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616455 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616490 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616535 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616575 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616598 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616625 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616643 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdt8\" (UniqueName: \"kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616664 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616683 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjdwf\" (UniqueName: \"kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616699 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616721 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.616748 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717587 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717627 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717676 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id\") pod \"manila-api-2\" 
(UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717711 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717728 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717751 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717772 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdt8\" (UniqueName: \"kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717796 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717814 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjdwf\" (UniqueName: \"kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717812 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717860 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717887 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.717965 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.718057 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts\") pod 
\"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.718239 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.718520 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.723070 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.723212 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.724283 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.725763 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.735073 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.738022 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjdwf\" (UniqueName: \"kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf\") pod \"manila-api-1\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.738103 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.740490 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdt8\" (UniqueName: \"kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8\") pod \"manila-api-2\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.871567 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:26 crc kubenswrapper[4899]: I0126 21:15:26.888307 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:27 crc kubenswrapper[4899]: I0126 21:15:27.215269 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:27 crc kubenswrapper[4899]: W0126 21:15:27.221128 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca9650df_33a4_4bf7_ba79_04d7ecbcf7b5.slice/crio-c33e120e626b89734d10943ea317efc8abe6d3758d78a38ecc5ddf3a2fceaacb WatchSource:0}: Error finding container c33e120e626b89734d10943ea317efc8abe6d3758d78a38ecc5ddf3a2fceaacb: Status 404 returned error can't find the container with id c33e120e626b89734d10943ea317efc8abe6d3758d78a38ecc5ddf3a2fceaacb Jan 26 21:15:27 crc kubenswrapper[4899]: I0126 21:15:27.375197 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:27 crc kubenswrapper[4899]: W0126 21:15:27.381071 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25996ab_6de5_4fff_9814_81fab296bfba.slice/crio-03f20b1a4e4a98831274f7650fd9087821559499ae423a3b952eba8ea95ce424 WatchSource:0}: Error finding container 03f20b1a4e4a98831274f7650fd9087821559499ae423a3b952eba8ea95ce424: Status 404 returned error can't find the container with id 03f20b1a4e4a98831274f7650fd9087821559499ae423a3b952eba8ea95ce424 Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.075560 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerStarted","Data":"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500"} Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.076211 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" 
event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerStarted","Data":"03f20b1a4e4a98831274f7650fd9087821559499ae423a3b952eba8ea95ce424"} Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.077833 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerStarted","Data":"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6"} Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.077883 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerStarted","Data":"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3"} Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.077895 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerStarted","Data":"c33e120e626b89734d10943ea317efc8abe6d3758d78a38ecc5ddf3a2fceaacb"} Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.077997 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:28 crc kubenswrapper[4899]: I0126 21:15:28.101572 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-api-2" podStartSLOduration=2.101552524 podStartE2EDuration="2.101552524s" podCreationTimestamp="2026-01-26 21:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:15:28.092914117 +0000 UTC m=+1217.474502164" watchObservedRunningTime="2026-01-26 21:15:28.101552524 +0000 UTC m=+1217.483140561" Jan 26 21:15:29 crc kubenswrapper[4899]: I0126 21:15:29.086197 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" 
event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerStarted","Data":"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18"} Jan 26 21:15:29 crc kubenswrapper[4899]: I0126 21:15:29.103807 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-api-1" podStartSLOduration=3.103768253 podStartE2EDuration="3.103768253s" podCreationTimestamp="2026-01-26 21:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:15:29.100423527 +0000 UTC m=+1218.482011554" watchObservedRunningTime="2026-01-26 21:15:29.103768253 +0000 UTC m=+1218.485356290" Jan 26 21:15:30 crc kubenswrapper[4899]: I0126 21:15:30.094159 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.143058 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.190465 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.967521 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.968069 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-2" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api-log" containerID="cri-o://3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3" gracePeriod=30 Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.968129 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-2" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api" 
containerID="cri-o://4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6" gracePeriod=30 Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.992521 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.992795 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-1" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api-log" containerID="cri-o://b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500" gracePeriod=30 Jan 26 21:15:49 crc kubenswrapper[4899]: I0126 21:15:49.992870 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-1" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api" containerID="cri-o://5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18" gracePeriod=30 Jan 26 21:15:50 crc kubenswrapper[4899]: I0126 21:15:50.282675 4899 generic.go:334] "Generic (PLEG): container finished" podID="a25996ab-6de5-4fff-9814-81fab296bfba" containerID="b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500" exitCode=143 Jan 26 21:15:50 crc kubenswrapper[4899]: I0126 21:15:50.282781 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerDied","Data":"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500"} Jan 26 21:15:50 crc kubenswrapper[4899]: I0126 21:15:50.285000 4899 generic.go:334] "Generic (PLEG): container finished" podID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerID="3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3" exitCode=143 Jan 26 21:15:50 crc kubenswrapper[4899]: I0126 21:15:50.285074 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" 
event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerDied","Data":"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3"} Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.762447 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.903986 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904132 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904190 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdt8\" (UniqueName: \"kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904232 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904241 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904290 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904358 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs\") pod \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\" (UID: \"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5\") " Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.904689 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.905193 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs" (OuterVolumeSpecName: "logs") pod "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.912345 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts" (OuterVolumeSpecName: "scripts") pod "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.924670 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.936439 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8" (OuterVolumeSpecName: "kube-api-access-cqdt8") pod "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "kube-api-access-cqdt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:15:53 crc kubenswrapper[4899]: I0126 21:15:53.951815 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data" (OuterVolumeSpecName: "config-data") pod "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" (UID: "ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.006313 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.006354 4899 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.006366 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.006378 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdt8\" (UniqueName: \"kubernetes.io/projected/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-kube-api-access-cqdt8\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.006389 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.055949 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209664 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjdwf\" (UniqueName: \"kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209720 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209763 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209799 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209962 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.209899 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.210012 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom\") pod \"a25996ab-6de5-4fff-9814-81fab296bfba\" (UID: \"a25996ab-6de5-4fff-9814-81fab296bfba\") " Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.210348 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a25996ab-6de5-4fff-9814-81fab296bfba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.210352 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs" (OuterVolumeSpecName: "logs") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.212737 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.213212 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf" (OuterVolumeSpecName: "kube-api-access-fjdwf") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "kube-api-access-fjdwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.213541 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts" (OuterVolumeSpecName: "scripts") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.246478 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data" (OuterVolumeSpecName: "config-data") pod "a25996ab-6de5-4fff-9814-81fab296bfba" (UID: "a25996ab-6de5-4fff-9814-81fab296bfba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.311980 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.312013 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.312036 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjdwf\" (UniqueName: \"kubernetes.io/projected/a25996ab-6de5-4fff-9814-81fab296bfba-kube-api-access-fjdwf\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.312050 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25996ab-6de5-4fff-9814-81fab296bfba-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.312062 4899 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25996ab-6de5-4fff-9814-81fab296bfba-logs\") on node \"crc\" DevicePath \"\"" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.316943 4899 generic.go:334] "Generic (PLEG): container finished" podID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerID="4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6" exitCode=0 Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.317025 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerDied","Data":"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6"} Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.317060 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-2" event={"ID":"ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5","Type":"ContainerDied","Data":"c33e120e626b89734d10943ea317efc8abe6d3758d78a38ecc5ddf3a2fceaacb"} Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.317079 4899 scope.go:117] "RemoveContainer" containerID="4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.317595 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-2" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.325523 4899 generic.go:334] "Generic (PLEG): container finished" podID="a25996ab-6de5-4fff-9814-81fab296bfba" containerID="5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18" exitCode=0 Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.325565 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerDied","Data":"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18"} Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.325595 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-1" event={"ID":"a25996ab-6de5-4fff-9814-81fab296bfba","Type":"ContainerDied","Data":"03f20b1a4e4a98831274f7650fd9087821559499ae423a3b952eba8ea95ce424"} Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.325652 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-1" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.373641 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.382997 4899 scope.go:117] "RemoveContainer" containerID="3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.389593 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-api-1"] Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.395884 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.399014 4899 scope.go:117] "RemoveContainer" containerID="4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6" Jan 26 21:15:54 crc kubenswrapper[4899]: E0126 21:15:54.399431 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6\": container with ID starting with 4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6 not found: ID does not exist" containerID="4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.399496 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6"} err="failed to get container status \"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6\": rpc error: code = NotFound desc = could not find container \"4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6\": container with ID starting with 4743075ccbe0b88922959fbe1d6df769e94defa365938199e2012cad801f06a6 not found: ID does not exist" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 
21:15:54.399538 4899 scope.go:117] "RemoveContainer" containerID="3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3" Jan 26 21:15:54 crc kubenswrapper[4899]: E0126 21:15:54.399848 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3\": container with ID starting with 3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3 not found: ID does not exist" containerID="3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.399888 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3"} err="failed to get container status \"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3\": rpc error: code = NotFound desc = could not find container \"3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3\": container with ID starting with 3290666fc97f34ea28d861f73b18b97b4401b5fd1f7419b57a903e178c57c3c3 not found: ID does not exist" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.399906 4899 scope.go:117] "RemoveContainer" containerID="5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.401476 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-api-2"] Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.421040 4899 scope.go:117] "RemoveContainer" containerID="b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.435290 4899 scope.go:117] "RemoveContainer" containerID="5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18" Jan 26 21:15:54 crc kubenswrapper[4899]: E0126 21:15:54.435689 4899 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18\": container with ID starting with 5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18 not found: ID does not exist" containerID="5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.435736 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18"} err="failed to get container status \"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18\": rpc error: code = NotFound desc = could not find container \"5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18\": container with ID starting with 5ec3a7b7f04a12b4ceabbd84a21092d7216b88449f0c0eef80e8c72c6b492d18 not found: ID does not exist" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.435772 4899 scope.go:117] "RemoveContainer" containerID="b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500" Jan 26 21:15:54 crc kubenswrapper[4899]: E0126 21:15:54.436207 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500\": container with ID starting with b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500 not found: ID does not exist" containerID="b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.436232 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500"} err="failed to get container status \"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500\": rpc error: code = NotFound desc = could not find container 
\"b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500\": container with ID starting with b7fce5688b8a06d3b98b82eb69272224291e6cf3348c817068fe3407724e5500 not found: ID does not exist" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.941997 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" path="/var/lib/kubelet/pods/a25996ab-6de5-4fff-9814-81fab296bfba/volumes" Jan 26 21:15:54 crc kubenswrapper[4899]: I0126 21:15:54.942866 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" path="/var/lib/kubelet/pods/ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5/volumes" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.703855 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:15:55 crc kubenswrapper[4899]: E0126 21:15:55.704173 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704187 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: E0126 21:15:55.704210 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704216 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: E0126 21:15:55.704225 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704232 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" 
containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: E0126 21:15:55.704240 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704246 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704366 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704385 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704393 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25996ab-6de5-4fff-9814-81fab296bfba" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.704404 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9650df-33a4-4bf7-ba79-04d7ecbcf7b5" containerName="manila-api-log" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.705307 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.715360 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.866849 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnjb\" (UniqueName: \"kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.867439 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.867853 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.867903 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.868089 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.969752 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.969825 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnjb\" (UniqueName: \"kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.969893 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.970302 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.970599 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts\") pod \"manila-scheduler-1\" (UID: 
\"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.970622 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.976256 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.976513 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.994681 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:55 crc kubenswrapper[4899]: I0126 21:15:55.998370 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnjb\" (UniqueName: \"kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb\") pod \"manila-scheduler-1\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:56 crc kubenswrapper[4899]: I0126 
21:15:56.071112 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:15:56 crc kubenswrapper[4899]: I0126 21:15:56.344531 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:15:57 crc kubenswrapper[4899]: I0126 21:15:57.363864 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerStarted","Data":"de0f135cb1c6ede63804417925eac7db2e19dc6aca42e55f3c8acb7fbd948845"} Jan 26 21:15:57 crc kubenswrapper[4899]: I0126 21:15:57.365412 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerStarted","Data":"176b67c3951d9690d4189858d0ef46d24dcc992adc55a7cd79ecfcd58ad8e097"} Jan 26 21:15:57 crc kubenswrapper[4899]: I0126 21:15:57.365510 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerStarted","Data":"8969d3b6839bcc6ac2a9f1bbe8f7746d24d55202fb2eda2f68db3065d38766fc"} Jan 26 21:15:57 crc kubenswrapper[4899]: I0126 21:15:57.388420 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-scheduler-1" podStartSLOduration=2.38840113 podStartE2EDuration="2.38840113s" podCreationTimestamp="2026-01-26 21:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:15:57.384384595 +0000 UTC m=+1246.765972652" watchObservedRunningTime="2026-01-26 21:15:57.38840113 +0000 UTC m=+1246.769989167" Jan 26 21:16:06 crc kubenswrapper[4899]: I0126 21:16:06.071867 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 
21:16:07 crc kubenswrapper[4899]: I0126 21:16:07.820910 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:16:07 crc kubenswrapper[4899]: I0126 21:16:07.883635 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:07 crc kubenswrapper[4899]: I0126 21:16:07.885078 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:07 crc kubenswrapper[4899]: I0126 21:16:07.893909 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.004015 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.004197 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.004249 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.004278 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45r5t\" (UniqueName: \"kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.004320 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106144 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106216 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106252 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45r5t\" (UniqueName: \"kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106292 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106324 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.106902 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.113597 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.113679 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.115297 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts\") pod \"manila-scheduler-2\" (UID: 
\"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.125250 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45r5t\" (UniqueName: \"kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t\") pod \"manila-scheduler-2\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.207133 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.436213 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:08 crc kubenswrapper[4899]: W0126 21:16:08.444401 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8255ab4_49c5_4568_8e2a_c19df003bc7f.slice/crio-766fc43179a093b820c792e2aa5fe18ffd3c3cd263c639da7b002c90c5cec2da WatchSource:0}: Error finding container 766fc43179a093b820c792e2aa5fe18ffd3c3cd263c639da7b002c90c5cec2da: Status 404 returned error can't find the container with id 766fc43179a093b820c792e2aa5fe18ffd3c3cd263c639da7b002c90c5cec2da Jan 26 21:16:08 crc kubenswrapper[4899]: I0126 21:16:08.461014 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerStarted","Data":"766fc43179a093b820c792e2aa5fe18ffd3c3cd263c639da7b002c90c5cec2da"} Jan 26 21:16:09 crc kubenswrapper[4899]: I0126 21:16:09.488972 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerStarted","Data":"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0"} Jan 26 
21:16:09 crc kubenswrapper[4899]: I0126 21:16:09.489255 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerStarted","Data":"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224"} Jan 26 21:16:09 crc kubenswrapper[4899]: I0126 21:16:09.512546 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-scheduler-2" podStartSLOduration=2.512522681 podStartE2EDuration="2.512522681s" podCreationTimestamp="2026-01-26 21:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:16:09.50722924 +0000 UTC m=+1258.888817277" watchObservedRunningTime="2026-01-26 21:16:09.512522681 +0000 UTC m=+1258.894110718" Jan 26 21:16:18 crc kubenswrapper[4899]: I0126 21:16:18.207784 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:19 crc kubenswrapper[4899]: I0126 21:16:19.962635 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:20 crc kubenswrapper[4899]: I0126 21:16:20.965705 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-db-sync-jtd8b"] Jan 26 21:16:20 crc kubenswrapper[4899]: I0126 21:16:20.977360 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-db-sync-jtd8b"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.052210 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.052496 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="manila-scheduler" 
containerID="cri-o://1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.052532 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="probe" containerID="cri-o://e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.067169 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.067757 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-2" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="manila-scheduler" containerID="cri-o://dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.067871 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-2" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="probe" containerID="cri-o://f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.080040 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.080358 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-1" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="manila-scheduler" containerID="cri-o://176b67c3951d9690d4189858d0ef46d24dcc992adc55a7cd79ecfcd58ad8e097" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.080436 4899 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="manila-kuttl-tests/manila-scheduler-1" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="probe" containerID="cri-o://de0f135cb1c6ede63804417925eac7db2e19dc6aca42e55f3c8acb7fbd948845" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.106017 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.106343 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="manila-share" containerID="cri-o://5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.106817 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="probe" containerID="cri-o://7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.151609 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manilacaea-account-delete-cs7xw"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.152748 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.169163 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manilacaea-account-delete-cs7xw"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.202983 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.203248 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api-log" containerID="cri-o://b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.203710 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api" containerID="cri-o://600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b" gracePeriod=30 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.204724 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjgg\" (UniqueName: \"kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.204806 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 
21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.307519 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.307631 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjgg\" (UniqueName: \"kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.308535 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.346865 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjgg\" (UniqueName: \"kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg\") pod \"manilacaea-account-delete-cs7xw\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.486886 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.597271 4899 generic.go:334] "Generic (PLEG): container finished" podID="4144d880-4cba-40ec-afd1-e83576312122" containerID="b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2" exitCode=143 Jan 26 21:16:21 crc kubenswrapper[4899]: I0126 21:16:21.597348 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerDied","Data":"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2"} Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.042159 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manilacaea-account-delete-cs7xw"] Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.618901 4899 generic.go:334] "Generic (PLEG): container finished" podID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerID="f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0" exitCode=0 Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.618962 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerDied","Data":"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0"} Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.621576 4899 generic.go:334] "Generic (PLEG): container finished" podID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerID="de0f135cb1c6ede63804417925eac7db2e19dc6aca42e55f3c8acb7fbd948845" exitCode=0 Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.621606 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerDied","Data":"de0f135cb1c6ede63804417925eac7db2e19dc6aca42e55f3c8acb7fbd948845"} Jan 26 21:16:22 crc kubenswrapper[4899]: 
I0126 21:16:22.623581 4899 generic.go:334] "Generic (PLEG): container finished" podID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerID="5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21" exitCode=1 Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.623647 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerDied","Data":"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21"} Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.624862 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" event={"ID":"e5c81d9f-4949-4483-894c-567bab067977","Type":"ContainerStarted","Data":"8fae8528a36b4647f039aa9d75914fde79b8eba80de4e98ce35636cc8d751da7"} Jan 26 21:16:22 crc kubenswrapper[4899]: I0126 21:16:22.940862 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0a325c-b753-4730-aba3-4c0b59e79b43" path="/var/lib/kubelet/pods/5e0a325c-b753-4730-aba3-4c0b59e79b43/volumes" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.631001 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.634212 4899 generic.go:334] "Generic (PLEG): container finished" podID="e5c81d9f-4949-4483-894c-567bab067977" containerID="2acb63eddfe0cfb8110d660fd1bf7d6e2e57e0b611230af4b427404ece33b8c3" exitCode=0 Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.634305 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" event={"ID":"e5c81d9f-4949-4483-894c-567bab067977","Type":"ContainerDied","Data":"2acb63eddfe0cfb8110d660fd1bf7d6e2e57e0b611230af4b427404ece33b8c3"} Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.636772 4899 generic.go:334] "Generic (PLEG): container finished" podID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerID="e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf" exitCode=0 Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.636841 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerDied","Data":"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf"} Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.638688 4899 generic.go:334] "Generic (PLEG): container finished" podID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerID="7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da" exitCode=0 Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.638721 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerDied","Data":"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da"} Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.638769 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" 
event={"ID":"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6","Type":"ContainerDied","Data":"982dfcadc89032738565a279f978e5d749156fc1a7392976f2fccd8adf351074"} Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.638773 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.638842 4899 scope.go:117] "RemoveContainer" containerID="7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647515 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647586 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647660 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647687 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647703 4899 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647780 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.647808 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx2wc\" (UniqueName: \"kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc\") pod \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\" (UID: \"8abfb846-c3d4-4b58-bd5b-d56a368d9ec6\") " Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.648941 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.651022 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.655533 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.658012 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph" (OuterVolumeSpecName: "ceph") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.659413 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts" (OuterVolumeSpecName: "scripts") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.664613 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc" (OuterVolumeSpecName: "kube-api-access-gx2wc") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "kube-api-access-gx2wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.666662 4899 scope.go:117] "RemoveContainer" containerID="5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.723998 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data" (OuterVolumeSpecName: "config-data") pod "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" (UID: "8abfb846-c3d4-4b58-bd5b-d56a368d9ec6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749413 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749458 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749475 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749486 4899 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749499 4899 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: 
I0126 21:16:23.749509 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.749522 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx2wc\" (UniqueName: \"kubernetes.io/projected/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6-kube-api-access-gx2wc\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.751436 4899 scope.go:117] "RemoveContainer" containerID="7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da" Jan 26 21:16:23 crc kubenswrapper[4899]: E0126 21:16:23.751944 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da\": container with ID starting with 7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da not found: ID does not exist" containerID="7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.751986 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da"} err="failed to get container status \"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da\": rpc error: code = NotFound desc = could not find container \"7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da\": container with ID starting with 7ab19a2ac1e7620c7f0b2ef6dbb0e3e9885598104441b3ea97182bba6ad0c7da not found: ID does not exist" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.752016 4899 scope.go:117] "RemoveContainer" containerID="5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21" Jan 26 21:16:23 crc kubenswrapper[4899]: E0126 21:16:23.752664 4899 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21\": container with ID starting with 5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21 not found: ID does not exist" containerID="5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.752717 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21"} err="failed to get container status \"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21\": rpc error: code = NotFound desc = could not find container \"5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21\": container with ID starting with 5f9875fc9fce1b5141f2ff9f6d58007ef642b580bc0f12372548a40ad8a5be21 not found: ID does not exist" Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.968018 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:23 crc kubenswrapper[4899]: I0126 21:16:23.973383 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.655122 4899 generic.go:334] "Generic (PLEG): container finished" podID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerID="176b67c3951d9690d4189858d0ef46d24dcc992adc55a7cd79ecfcd58ad8e097" exitCode=0 Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.655165 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerDied","Data":"176b67c3951d9690d4189858d0ef46d24dcc992adc55a7cd79ecfcd58ad8e097"} Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.941825 4899 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" path="/var/lib/kubelet/pods/8abfb846-c3d4-4b58-bd5b-d56a368d9ec6/volumes" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.942096 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.977629 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id\") pod \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.977736 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data\") pod \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.977822 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom\") pod \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.977857 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts\") pod \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.977905 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfnjb\" (UniqueName: 
\"kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb\") pod \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\" (UID: \"38e7eda0-e02c-4bcf-aa80-2a49a210797b\") " Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.980819 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "38e7eda0-e02c-4bcf-aa80-2a49a210797b" (UID: "38e7eda0-e02c-4bcf-aa80-2a49a210797b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.985991 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb" (OuterVolumeSpecName: "kube-api-access-sfnjb") pod "38e7eda0-e02c-4bcf-aa80-2a49a210797b" (UID: "38e7eda0-e02c-4bcf-aa80-2a49a210797b"). InnerVolumeSpecName "kube-api-access-sfnjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.987104 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts" (OuterVolumeSpecName: "scripts") pod "38e7eda0-e02c-4bcf-aa80-2a49a210797b" (UID: "38e7eda0-e02c-4bcf-aa80-2a49a210797b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.988074 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38e7eda0-e02c-4bcf-aa80-2a49a210797b" (UID: "38e7eda0-e02c-4bcf-aa80-2a49a210797b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:24 crc kubenswrapper[4899]: I0126 21:16:24.995683 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.057343 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.069307 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data" (OuterVolumeSpecName: "config-data") pod "38e7eda0-e02c-4bcf-aa80-2a49a210797b" (UID: "38e7eda0-e02c-4bcf-aa80-2a49a210797b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080562 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080667 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080694 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts\") pod \"e5c81d9f-4949-4483-894c-567bab067977\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080720 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080775 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjgg\" (UniqueName: \"kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg\") pod \"e5c81d9f-4949-4483-894c-567bab067977\" (UID: \"e5c81d9f-4949-4483-894c-567bab067977\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080805 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080832 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j9sl\" (UniqueName: \"kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.080880 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs\") pod \"4144d880-4cba-40ec-afd1-e83576312122\" (UID: \"4144d880-4cba-40ec-afd1-e83576312122\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081190 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e7eda0-e02c-4bcf-aa80-2a49a210797b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: 
I0126 21:16:25.081202 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081212 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081221 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e7eda0-e02c-4bcf-aa80-2a49a210797b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081231 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfnjb\" (UniqueName: \"kubernetes.io/projected/38e7eda0-e02c-4bcf-aa80-2a49a210797b-kube-api-access-sfnjb\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081545 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs" (OuterVolumeSpecName: "logs") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081542 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c81d9f-4949-4483-894c-567bab067977" (UID: "e5c81d9f-4949-4483-894c-567bab067977"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.081590 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.083975 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg" (OuterVolumeSpecName: "kube-api-access-4fjgg") pod "e5c81d9f-4949-4483-894c-567bab067977" (UID: "e5c81d9f-4949-4483-894c-567bab067977"). InnerVolumeSpecName "kube-api-access-4fjgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.084084 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.086354 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts" (OuterVolumeSpecName: "scripts") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.088507 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl" (OuterVolumeSpecName: "kube-api-access-6j9sl") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "kube-api-access-6j9sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.125731 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data" (OuterVolumeSpecName: "config-data") pod "4144d880-4cba-40ec-afd1-e83576312122" (UID: "4144d880-4cba-40ec-afd1-e83576312122"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.152428 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.182688 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45r5t\" (UniqueName: \"kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t\") pod \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.182758 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data\") pod \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.182841 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts\") pod \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.182886 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id\") pod \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.182912 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom\") pod \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\" (UID: \"c8255ab4-49c5-4568-8e2a-c19df003bc7f\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183177 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j9sl\" (UniqueName: 
\"kubernetes.io/projected/4144d880-4cba-40ec-afd1-e83576312122-kube-api-access-6j9sl\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183191 4899 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4144d880-4cba-40ec-afd1-e83576312122-logs\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183201 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183210 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183219 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c81d9f-4949-4483-894c-567bab067977-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183227 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4144d880-4cba-40ec-afd1-e83576312122-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183238 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjgg\" (UniqueName: \"kubernetes.io/projected/e5c81d9f-4949-4483-894c-567bab067977-kube-api-access-4fjgg\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.183246 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4144d880-4cba-40ec-afd1-e83576312122-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc 
kubenswrapper[4899]: I0126 21:16:25.183438 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8255ab4-49c5-4568-8e2a-c19df003bc7f" (UID: "c8255ab4-49c5-4568-8e2a-c19df003bc7f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.186653 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t" (OuterVolumeSpecName: "kube-api-access-45r5t") pod "c8255ab4-49c5-4568-8e2a-c19df003bc7f" (UID: "c8255ab4-49c5-4568-8e2a-c19df003bc7f"). InnerVolumeSpecName "kube-api-access-45r5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.188164 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts" (OuterVolumeSpecName: "scripts") pod "c8255ab4-49c5-4568-8e2a-c19df003bc7f" (UID: "c8255ab4-49c5-4568-8e2a-c19df003bc7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.188246 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8255ab4-49c5-4568-8e2a-c19df003bc7f" (UID: "c8255ab4-49c5-4568-8e2a-c19df003bc7f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.229128 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.250477 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data" (OuterVolumeSpecName: "config-data") pod "c8255ab4-49c5-4568-8e2a-c19df003bc7f" (UID: "c8255ab4-49c5-4568-8e2a-c19df003bc7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.284516 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bbpg\" (UniqueName: \"kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg\") pod \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.284904 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data\") pod \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.284989 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts\") pod \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285055 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom\") pod \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285099 4899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id\") pod \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\" (UID: \"b4fd6983-8480-4ead-a384-3aaf4eba13a9\") " Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285510 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285534 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8255ab4-49c5-4568-8e2a-c19df003bc7f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285547 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285559 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45r5t\" (UniqueName: \"kubernetes.io/projected/c8255ab4-49c5-4568-8e2a-c19df003bc7f-kube-api-access-45r5t\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285569 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8255ab4-49c5-4568-8e2a-c19df003bc7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.285640 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b4fd6983-8480-4ead-a384-3aaf4eba13a9" (UID: "b4fd6983-8480-4ead-a384-3aaf4eba13a9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.287910 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts" (OuterVolumeSpecName: "scripts") pod "b4fd6983-8480-4ead-a384-3aaf4eba13a9" (UID: "b4fd6983-8480-4ead-a384-3aaf4eba13a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.287955 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg" (OuterVolumeSpecName: "kube-api-access-4bbpg") pod "b4fd6983-8480-4ead-a384-3aaf4eba13a9" (UID: "b4fd6983-8480-4ead-a384-3aaf4eba13a9"). InnerVolumeSpecName "kube-api-access-4bbpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.289630 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b4fd6983-8480-4ead-a384-3aaf4eba13a9" (UID: "b4fd6983-8480-4ead-a384-3aaf4eba13a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.339047 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data" (OuterVolumeSpecName: "config-data") pod "b4fd6983-8480-4ead-a384-3aaf4eba13a9" (UID: "b4fd6983-8480-4ead-a384-3aaf4eba13a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.386976 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bbpg\" (UniqueName: \"kubernetes.io/projected/b4fd6983-8480-4ead-a384-3aaf4eba13a9-kube-api-access-4bbpg\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.387025 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.387039 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.387050 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fd6983-8480-4ead-a384-3aaf4eba13a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.387062 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fd6983-8480-4ead-a384-3aaf4eba13a9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.664071 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" event={"ID":"e5c81d9f-4949-4483-894c-567bab067977","Type":"ContainerDied","Data":"8fae8528a36b4647f039aa9d75914fde79b8eba80de4e98ce35636cc8d751da7"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.664098 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manilacaea-account-delete-cs7xw" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.664113 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fae8528a36b4647f039aa9d75914fde79b8eba80de4e98ce35636cc8d751da7" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.666268 4899 generic.go:334] "Generic (PLEG): container finished" podID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerID="1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51" exitCode=0 Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.666305 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerDied","Data":"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.666347 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.666365 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"b4fd6983-8480-4ead-a384-3aaf4eba13a9","Type":"ContainerDied","Data":"602e3fde12f603fe0ae5bfde3b5423a0fec258e2527ff2f12a51ebb4c3ce7d78"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.666434 4899 scope.go:117] "RemoveContainer" containerID="e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.669692 4899 generic.go:334] "Generic (PLEG): container finished" podID="4144d880-4cba-40ec-afd1-e83576312122" containerID="600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b" exitCode=0 Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.669771 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" 
event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerDied","Data":"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.669818 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"4144d880-4cba-40ec-afd1-e83576312122","Type":"ContainerDied","Data":"990f574467ae7a09c8c512e455981e3e785ededb5421f27eea317963c5d0b2fa"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.669819 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.672440 4899 generic.go:334] "Generic (PLEG): container finished" podID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerID="dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224" exitCode=0 Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.672504 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-2" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.672522 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerDied","Data":"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.672544 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-2" event={"ID":"c8255ab4-49c5-4568-8e2a-c19df003bc7f","Type":"ContainerDied","Data":"766fc43179a093b820c792e2aa5fe18ffd3c3cd263c639da7b002c90c5cec2da"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.674702 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-1" event={"ID":"38e7eda0-e02c-4bcf-aa80-2a49a210797b","Type":"ContainerDied","Data":"8969d3b6839bcc6ac2a9f1bbe8f7746d24d55202fb2eda2f68db3065d38766fc"} Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.674740 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-1" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.690168 4899 scope.go:117] "RemoveContainer" containerID="1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.714196 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.725307 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.726917 4899 scope.go:117] "RemoveContainer" containerID="e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf" Jan 26 21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.727672 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf\": container with ID starting with e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf not found: ID does not exist" containerID="e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.727717 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf"} err="failed to get container status \"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf\": rpc error: code = NotFound desc = could not find container \"e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf\": container with ID starting with e69b6aa22198c29f8295a20aad605a558ef6f075118e91f1fae04b7548931baf not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.727753 4899 scope.go:117] "RemoveContainer" containerID="1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51" Jan 26 
21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.728343 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51\": container with ID starting with 1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51 not found: ID does not exist" containerID="1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.728365 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51"} err="failed to get container status \"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51\": rpc error: code = NotFound desc = could not find container \"1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51\": container with ID starting with 1a3b7ce742c4ad6ed94cde899f823c98eb81d72c524242b5783c4964e6089d51 not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.728380 4899 scope.go:117] "RemoveContainer" containerID="600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.730312 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.737364 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.742320 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.747808 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-scheduler-1"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.750091 4899 scope.go:117] "RemoveContainer" 
containerID="b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.753960 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.760630 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-scheduler-2"] Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.767666 4899 scope.go:117] "RemoveContainer" containerID="600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b" Jan 26 21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.768852 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b\": container with ID starting with 600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b not found: ID does not exist" containerID="600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.768902 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b"} err="failed to get container status \"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b\": rpc error: code = NotFound desc = could not find container \"600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b\": container with ID starting with 600c3d3d8e429160f330fb905bb757738decba3ab0664d72e7acbaabe867b43b not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.768947 4899 scope.go:117] "RemoveContainer" containerID="b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2" Jan 26 21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.769456 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2\": container with ID starting with b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2 not found: ID does not exist" containerID="b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.769491 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2"} err="failed to get container status \"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2\": rpc error: code = NotFound desc = could not find container \"b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2\": container with ID starting with b522f198ad54eaf6625a638155ec13a0cfbb72e1ddb9e74ebe7dae67c1fef2a2 not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.769516 4899 scope.go:117] "RemoveContainer" containerID="f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.793227 4899 scope.go:117] "RemoveContainer" containerID="dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.820284 4899 scope.go:117] "RemoveContainer" containerID="f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0" Jan 26 21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.820756 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0\": container with ID starting with f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0 not found: ID does not exist" containerID="f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.820800 4899 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0"} err="failed to get container status \"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0\": rpc error: code = NotFound desc = could not find container \"f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0\": container with ID starting with f4bcddf251df464c7d1309fbb0230f9031cd0f1ad68aa887393797fa87b1e5c0 not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.820830 4899 scope.go:117] "RemoveContainer" containerID="dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224" Jan 26 21:16:25 crc kubenswrapper[4899]: E0126 21:16:25.821378 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224\": container with ID starting with dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224 not found: ID does not exist" containerID="dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.821427 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224"} err="failed to get container status \"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224\": rpc error: code = NotFound desc = could not find container \"dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224\": container with ID starting with dfe5d7f114cf2a9f1deba094533cf5e9a819d7cafe976e45f44bc1e26532e224 not found: ID does not exist" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.821458 4899 scope.go:117] "RemoveContainer" containerID="de0f135cb1c6ede63804417925eac7db2e19dc6aca42e55f3c8acb7fbd948845" Jan 26 21:16:25 crc kubenswrapper[4899]: I0126 21:16:25.837671 4899 scope.go:117] "RemoveContainer" 
containerID="176b67c3951d9690d4189858d0ef46d24dcc992adc55a7cd79ecfcd58ad8e097" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.153119 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-db-create-fs9xx"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.162214 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-db-create-fs9xx"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.167455 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manilacaea-account-delete-cs7xw"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.171988 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manilacaea-account-delete-cs7xw"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.176849 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-caea-account-create-update-smcmf"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.181805 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-caea-account-create-update-smcmf"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244005 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-create-tdsg2"] Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244272 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244284 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244296 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244303 4899 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244314 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244320 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244333 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244341 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244351 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="manila-share" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244358 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="manila-share" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244367 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244373 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244384 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244389 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" 
containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244395 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244401 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244409 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api-log" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244414 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api-log" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244423 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c81d9f-4949-4483-894c-567bab067977" containerName="mariadb-account-delete" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244429 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c81d9f-4949-4483-894c-567bab067977" containerName="mariadb-account-delete" Jan 26 21:16:26 crc kubenswrapper[4899]: E0126 21:16:26.244439 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244445 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244552 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244568 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" 
containerName="manila-share" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244576 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244582 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244594 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c81d9f-4949-4483-894c-567bab067977" containerName="mariadb-account-delete" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244603 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="manila-scheduler" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244610 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244620 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244628 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="4144d880-4cba-40ec-afd1-e83576312122" containerName="manila-api-log" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244635 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.244645 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abfb846-c3d4-4b58-bd5b-d56a368d9ec6" containerName="probe" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.245128 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.259888 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-create-tdsg2"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.302858 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.303080 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llsd\" (UniqueName: \"kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.341470 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-6447-account-create-update-8fknt"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.342268 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.345761 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-db-secret" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.353852 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-6447-account-create-update-8fknt"] Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.404146 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.404238 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.404272 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskrg\" (UniqueName: \"kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.404295 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9llsd\" (UniqueName: 
\"kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.405096 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.426242 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9llsd\" (UniqueName: \"kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd\") pod \"manila-db-create-tdsg2\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.506193 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.506806 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bskrg\" (UniqueName: \"kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.507064 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.522393 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bskrg\" (UniqueName: \"kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg\") pod \"manila-6447-account-create-update-8fknt\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.564024 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.654030 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.939103 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e7eda0-e02c-4bcf-aa80-2a49a210797b" path="/var/lib/kubelet/pods/38e7eda0-e02c-4bcf-aa80-2a49a210797b/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.940063 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4144d880-4cba-40ec-afd1-e83576312122" path="/var/lib/kubelet/pods/4144d880-4cba-40ec-afd1-e83576312122/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.940616 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c6dc85-792f-4c5f-9082-34a70a742114" path="/var/lib/kubelet/pods/52c6dc85-792f-4c5f-9082-34a70a742114/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.941636 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907ae7f3-9325-49ec-a87a-ff3a39bec840" path="/var/lib/kubelet/pods/907ae7f3-9325-49ec-a87a-ff3a39bec840/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.942135 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fd6983-8480-4ead-a384-3aaf4eba13a9" path="/var/lib/kubelet/pods/b4fd6983-8480-4ead-a384-3aaf4eba13a9/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.942721 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8255ab4-49c5-4568-8e2a-c19df003bc7f" path="/var/lib/kubelet/pods/c8255ab4-49c5-4568-8e2a-c19df003bc7f/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.943883 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c81d9f-4949-4483-894c-567bab067977" path="/var/lib/kubelet/pods/e5c81d9f-4949-4483-894c-567bab067977/volumes" Jan 26 21:16:26 crc kubenswrapper[4899]: I0126 21:16:26.972292 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["manila-kuttl-tests/manila-db-create-tdsg2"] Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.084695 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-6447-account-create-update-8fknt"] Jan 26 21:16:27 crc kubenswrapper[4899]: W0126 21:16:27.087846 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf2a8fab_05f9_4f6d_adeb_184819d687d9.slice/crio-a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58 WatchSource:0}: Error finding container a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58: Status 404 returned error can't find the container with id a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58 Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.721300 4899 generic.go:334] "Generic (PLEG): container finished" podID="df2a8fab-05f9-4f6d-adeb-184819d687d9" containerID="3b35976384a4da5da6b2567db096ec17dd80c593e6cacb25611c5c053239b1b7" exitCode=0 Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.721363 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" event={"ID":"df2a8fab-05f9-4f6d-adeb-184819d687d9","Type":"ContainerDied","Data":"3b35976384a4da5da6b2567db096ec17dd80c593e6cacb25611c5c053239b1b7"} Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.721393 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" event={"ID":"df2a8fab-05f9-4f6d-adeb-184819d687d9","Type":"ContainerStarted","Data":"a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58"} Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.723546 4899 generic.go:334] "Generic (PLEG): container finished" podID="0a996441-754f-4281-91cb-92d4a14f9cb3" containerID="40d36d698b3c1c2f803d20c6a0d155485bea9677d095e245325d7a38b08195b7" exitCode=0 Jan 26 21:16:27 crc kubenswrapper[4899]: 
I0126 21:16:27.723582 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-tdsg2" event={"ID":"0a996441-754f-4281-91cb-92d4a14f9cb3","Type":"ContainerDied","Data":"40d36d698b3c1c2f803d20c6a0d155485bea9677d095e245325d7a38b08195b7"} Jan 26 21:16:27 crc kubenswrapper[4899]: I0126 21:16:27.723640 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-tdsg2" event={"ID":"0a996441-754f-4281-91cb-92d4a14f9cb3","Type":"ContainerStarted","Data":"7a17c3c3e5b7c95557aa3a23926d7e13410f18fb2c0def6da826306752935701"} Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.048187 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.060579 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.144028 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bskrg\" (UniqueName: \"kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg\") pod \"df2a8fab-05f9-4f6d-adeb-184819d687d9\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.144122 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts\") pod \"df2a8fab-05f9-4f6d-adeb-184819d687d9\" (UID: \"df2a8fab-05f9-4f6d-adeb-184819d687d9\") " Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.144253 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9llsd\" (UniqueName: \"kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd\") pod 
\"0a996441-754f-4281-91cb-92d4a14f9cb3\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.144296 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts\") pod \"0a996441-754f-4281-91cb-92d4a14f9cb3\" (UID: \"0a996441-754f-4281-91cb-92d4a14f9cb3\") " Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.145581 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a996441-754f-4281-91cb-92d4a14f9cb3" (UID: "0a996441-754f-4281-91cb-92d4a14f9cb3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.146634 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df2a8fab-05f9-4f6d-adeb-184819d687d9" (UID: "df2a8fab-05f9-4f6d-adeb-184819d687d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.150761 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd" (OuterVolumeSpecName: "kube-api-access-9llsd") pod "0a996441-754f-4281-91cb-92d4a14f9cb3" (UID: "0a996441-754f-4281-91cb-92d4a14f9cb3"). InnerVolumeSpecName "kube-api-access-9llsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.150817 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg" (OuterVolumeSpecName: "kube-api-access-bskrg") pod "df2a8fab-05f9-4f6d-adeb-184819d687d9" (UID: "df2a8fab-05f9-4f6d-adeb-184819d687d9"). InnerVolumeSpecName "kube-api-access-bskrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.246160 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9llsd\" (UniqueName: \"kubernetes.io/projected/0a996441-754f-4281-91cb-92d4a14f9cb3-kube-api-access-9llsd\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.246200 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a996441-754f-4281-91cb-92d4a14f9cb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.246214 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bskrg\" (UniqueName: \"kubernetes.io/projected/df2a8fab-05f9-4f6d-adeb-184819d687d9-kube-api-access-bskrg\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.246227 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df2a8fab-05f9-4f6d-adeb-184819d687d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.739376 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-tdsg2" event={"ID":"0a996441-754f-4281-91cb-92d4a14f9cb3","Type":"ContainerDied","Data":"7a17c3c3e5b7c95557aa3a23926d7e13410f18fb2c0def6da826306752935701"} Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 
21:16:29.739414 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a17c3c3e5b7c95557aa3a23926d7e13410f18fb2c0def6da826306752935701" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.739398 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-tdsg2" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.741710 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" event={"ID":"df2a8fab-05f9-4f6d-adeb-184819d687d9","Type":"ContainerDied","Data":"a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58"} Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.741734 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74df7b8f4c13c60988e63f51d2b2d46dde3161e76b3b1ec933c9136422b7c58" Jan 26 21:16:29 crc kubenswrapper[4899]: I0126 21:16:29.741752 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-6447-account-create-update-8fknt" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.479833 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-sync-mk995"] Jan 26 21:16:31 crc kubenswrapper[4899]: E0126 21:16:31.480492 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2a8fab-05f9-4f6d-adeb-184819d687d9" containerName="mariadb-account-create-update" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.480506 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2a8fab-05f9-4f6d-adeb-184819d687d9" containerName="mariadb-account-create-update" Jan 26 21:16:31 crc kubenswrapper[4899]: E0126 21:16:31.480518 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a996441-754f-4281-91cb-92d4a14f9cb3" containerName="mariadb-database-create" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.480525 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a996441-754f-4281-91cb-92d4a14f9cb3" containerName="mariadb-database-create" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.480660 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a996441-754f-4281-91cb-92d4a14f9cb3" containerName="mariadb-database-create" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.480674 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2a8fab-05f9-4f6d-adeb-184819d687d9" containerName="mariadb-account-create-update" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.481182 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.483282 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"combined-ca-bundle" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.483556 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.483641 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-bbwzk" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.489479 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-mk995"] Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.581062 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mwl\" (UniqueName: \"kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.581167 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.581226 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc 
kubenswrapper[4899]: I0126 21:16:31.581244 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.682939 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.682999 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.683046 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mwl\" (UniqueName: \"kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.683106 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.688328 4899 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.688607 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.688684 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.702440 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mwl\" (UniqueName: \"kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl\") pod \"manila-db-sync-mk995\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:31 crc kubenswrapper[4899]: I0126 21:16:31.801755 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:32 crc kubenswrapper[4899]: I0126 21:16:32.208603 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-mk995"] Jan 26 21:16:32 crc kubenswrapper[4899]: I0126 21:16:32.766992 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-mk995" event={"ID":"4b5cd91f-692c-44ef-84db-f1b54333c162","Type":"ContainerStarted","Data":"925593cf1093756ffc03515da9a5f83425874f840809e0baec352d91c434ee2b"} Jan 26 21:16:32 crc kubenswrapper[4899]: I0126 21:16:32.767332 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-mk995" event={"ID":"4b5cd91f-692c-44ef-84db-f1b54333c162","Type":"ContainerStarted","Data":"1ef916594460580eef0c09dc52cea57e23ffe5aaadc571ac7d7dcd3e69e85a5f"} Jan 26 21:16:32 crc kubenswrapper[4899]: I0126 21:16:32.786127 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-db-sync-mk995" podStartSLOduration=1.786104714 podStartE2EDuration="1.786104714s" podCreationTimestamp="2026-01-26 21:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:16:32.780955006 +0000 UTC m=+1282.162543053" watchObservedRunningTime="2026-01-26 21:16:32.786104714 +0000 UTC m=+1282.167692751" Jan 26 21:16:34 crc kubenswrapper[4899]: I0126 21:16:34.797071 4899 generic.go:334] "Generic (PLEG): container finished" podID="4b5cd91f-692c-44ef-84db-f1b54333c162" containerID="925593cf1093756ffc03515da9a5f83425874f840809e0baec352d91c434ee2b" exitCode=0 Jan 26 21:16:34 crc kubenswrapper[4899]: I0126 21:16:34.797149 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-mk995" 
event={"ID":"4b5cd91f-692c-44ef-84db-f1b54333c162","Type":"ContainerDied","Data":"925593cf1093756ffc03515da9a5f83425874f840809e0baec352d91c434ee2b"} Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.047036 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.146145 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data\") pod \"4b5cd91f-692c-44ef-84db-f1b54333c162\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.146214 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data\") pod \"4b5cd91f-692c-44ef-84db-f1b54333c162\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.146251 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8mwl\" (UniqueName: \"kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl\") pod \"4b5cd91f-692c-44ef-84db-f1b54333c162\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.146301 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle\") pod \"4b5cd91f-692c-44ef-84db-f1b54333c162\" (UID: \"4b5cd91f-692c-44ef-84db-f1b54333c162\") " Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.151526 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data" 
(OuterVolumeSpecName: "job-config-data") pod "4b5cd91f-692c-44ef-84db-f1b54333c162" (UID: "4b5cd91f-692c-44ef-84db-f1b54333c162"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.152227 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl" (OuterVolumeSpecName: "kube-api-access-k8mwl") pod "4b5cd91f-692c-44ef-84db-f1b54333c162" (UID: "4b5cd91f-692c-44ef-84db-f1b54333c162"). InnerVolumeSpecName "kube-api-access-k8mwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.157608 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data" (OuterVolumeSpecName: "config-data") pod "4b5cd91f-692c-44ef-84db-f1b54333c162" (UID: "4b5cd91f-692c-44ef-84db-f1b54333c162"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.165243 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b5cd91f-692c-44ef-84db-f1b54333c162" (UID: "4b5cd91f-692c-44ef-84db-f1b54333c162"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.248136 4899 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.248171 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.248181 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8mwl\" (UniqueName: \"kubernetes.io/projected/4b5cd91f-692c-44ef-84db-f1b54333c162-kube-api-access-k8mwl\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.248192 4899 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5cd91f-692c-44ef-84db-f1b54333c162-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.813696 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-mk995" Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.813640 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-mk995" event={"ID":"4b5cd91f-692c-44ef-84db-f1b54333c162","Type":"ContainerDied","Data":"1ef916594460580eef0c09dc52cea57e23ffe5aaadc571ac7d7dcd3e69e85a5f"} Jan 26 21:16:36 crc kubenswrapper[4899]: I0126 21:16:36.814159 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef916594460580eef0c09dc52cea57e23ffe5aaadc571ac7d7dcd3e69e85a5f" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.029344 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: E0126 21:16:37.029687 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5cd91f-692c-44ef-84db-f1b54333c162" containerName="manila-db-sync" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.029707 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5cd91f-692c-44ef-84db-f1b54333c162" containerName="manila-db-sync" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.029842 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5cd91f-692c-44ef-84db-f1b54333c162" containerName="manila-db-sync" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.030764 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.032885 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-bbwzk" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.034478 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.034672 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scripts" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.034703 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"combined-ca-bundle" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.035356 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scheduler-config-data" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.052445 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.059764 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.059821 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.059900 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.059945 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.059978 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfkjb\" (UniqueName: \"kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.060005 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.075160 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.076687 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.081660 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-share-share0-config-data" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.081888 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"ceph-conf-files" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.104854 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161572 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161611 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161643 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161665 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161691 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfkjb\" (UniqueName: \"kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161710 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp5fq\" (UniqueName: \"kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161735 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.161898 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162132 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162183 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162275 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162318 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162348 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162368 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.162437 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.165518 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.171024 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.172253 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.173035 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " 
pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.185296 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfkjb\" (UniqueName: \"kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb\") pod \"manila-scheduler-0\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.238235 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.239310 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.241428 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-api-config-data" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.241556 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"cert-manila-public-svc" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.241824 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"cert-manila-internal-svc" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.261331 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.263792 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.263833 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bvw84\" (UniqueName: \"kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.263865 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.263890 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264034 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264122 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264169 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264229 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264260 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264357 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264385 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264422 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data\") pod 
\"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264454 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp5fq\" (UniqueName: \"kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264499 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264546 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264582 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264617 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " 
pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.264802 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.265003 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.272485 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.275975 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.276367 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.277212 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.279752 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.290329 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp5fq\" (UniqueName: \"kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq\") pod \"manila-share-share0-0\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.354707 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.368862 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369083 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369101 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369569 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369728 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369763 4899 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369784 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369807 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369831 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.369848 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvw84\" (UniqueName: \"kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.370399 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs\") pod \"manila-api-0\" (UID: 
\"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.372883 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.373588 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.373884 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.375103 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.376772 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.377466 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.389282 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvw84\" (UniqueName: \"kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84\") pod \"manila-api-0\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.398572 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.558831 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.637992 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.736800 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: W0126 21:16:37.753792 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a87185_1adc_4de2_8bd7_8eaac51ec303.slice/crio-0f4b11d1092f9a14aea571f14a049d0858d761c9fe32d6372e712248e43ea4c7 WatchSource:0}: Error finding container 0f4b11d1092f9a14aea571f14a049d0858d761c9fe32d6372e712248e43ea4c7: Status 404 returned error can't find the container with id 0f4b11d1092f9a14aea571f14a049d0858d761c9fe32d6372e712248e43ea4c7 Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.829661 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" 
event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerStarted","Data":"0f4b11d1092f9a14aea571f14a049d0858d761c9fe32d6372e712248e43ea4c7"} Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.834946 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerStarted","Data":"d59d19556e182ceda91568445c22cc43e7e843dcf57e786121c88a1a5f31c5f7"} Jan 26 21:16:37 crc kubenswrapper[4899]: I0126 21:16:37.860711 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:37 crc kubenswrapper[4899]: W0126 21:16:37.872414 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7812d9a_1081_4b52_9a7f_da420cf3aab9.slice/crio-a66d7e1259560828901ffdf3811903196485f2292d14c8042a9e59bf84609a9d WatchSource:0}: Error finding container a66d7e1259560828901ffdf3811903196485f2292d14c8042a9e59bf84609a9d: Status 404 returned error can't find the container with id a66d7e1259560828901ffdf3811903196485f2292d14c8042a9e59bf84609a9d Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.864191 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerStarted","Data":"9ef31c56943278d5b6503fcebb848fa29141c457422b0439b16c986a37f1546d"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.866299 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerStarted","Data":"ac30f6e3a619205e81d4b8c3947ca65d90480fc8d89d731c097dfcd44d727524"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.866328 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" 
event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerStarted","Data":"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.866410 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerStarted","Data":"a66d7e1259560828901ffdf3811903196485f2292d14c8042a9e59bf84609a9d"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.868724 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerStarted","Data":"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.869203 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerStarted","Data":"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65"} Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.889675 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-share-share0-0" podStartSLOduration=1.889649421 podStartE2EDuration="1.889649421s" podCreationTimestamp="2026-01-26 21:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:16:38.888412856 +0000 UTC m=+1288.270000903" watchObservedRunningTime="2026-01-26 21:16:38.889649421 +0000 UTC m=+1288.271237458" Jan 26 21:16:38 crc kubenswrapper[4899]: I0126 21:16:38.915531 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-scheduler-0" podStartSLOduration=1.915506221 podStartE2EDuration="1.915506221s" podCreationTimestamp="2026-01-26 21:16:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:16:38.908280405 +0000 UTC m=+1288.289868432" watchObservedRunningTime="2026-01-26 21:16:38.915506221 +0000 UTC m=+1288.297094258" Jan 26 21:16:39 crc kubenswrapper[4899]: I0126 21:16:39.880360 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerStarted","Data":"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d"} Jan 26 21:16:39 crc kubenswrapper[4899]: I0126 21:16:39.880836 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:39 crc kubenswrapper[4899]: I0126 21:16:39.898712 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-api-0" podStartSLOduration=2.898687956 podStartE2EDuration="2.898687956s" podCreationTimestamp="2026-01-26 21:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:16:39.896816873 +0000 UTC m=+1289.278404900" watchObservedRunningTime="2026-01-26 21:16:39.898687956 +0000 UTC m=+1289.280275993" Jan 26 21:16:47 crc kubenswrapper[4899]: I0126 21:16:47.355029 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:47 crc kubenswrapper[4899]: I0126 21:16:47.399565 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.046072 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.224671 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.297337 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.792914 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-db-sync-mk995"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.805360 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-db-sync-mk995"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.835258 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.850039 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.888406 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila6447-account-delete-5h7gh"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.889306 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.894879 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.898767 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgqxs\" (UniqueName: \"kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs\") pod \"manila6447-account-delete-5h7gh\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.898836 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts\") pod \"manila6447-account-delete-5h7gh\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:16:59 crc kubenswrapper[4899]: I0126 21:16:59.911434 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila6447-account-delete-5h7gh"] Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.000358 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts\") pod \"manila6447-account-delete-5h7gh\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.000496 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgqxs\" (UniqueName: \"kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs\") pod \"manila6447-account-delete-5h7gh\" 
(UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.001491 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts\") pod \"manila6447-account-delete-5h7gh\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.019441 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgqxs\" (UniqueName: \"kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs\") pod \"manila6447-account-delete-5h7gh\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.030132 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="manila-scheduler" containerID="cri-o://bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65" gracePeriod=30 Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.030351 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="manila-share" containerID="cri-o://ac30f6e3a619205e81d4b8c3947ca65d90480fc8d89d731c097dfcd44d727524" gracePeriod=30 Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.030511 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api-log" containerID="cri-o://6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470" gracePeriod=30 
Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.030866 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="probe" containerID="cri-o://4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b" gracePeriod=30 Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.031112 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="probe" containerID="cri-o://9ef31c56943278d5b6503fcebb848fa29141c457422b0439b16c986a37f1546d" gracePeriod=30 Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.031175 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api" containerID="cri-o://df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d" gracePeriod=30 Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.109407 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.109467 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.214129 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.762054 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila6447-account-delete-5h7gh"] Jan 26 21:17:00 crc kubenswrapper[4899]: W0126 21:17:00.768705 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0034dce_d282_42f0_9cde_db3b4df6fe00.slice/crio-9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a WatchSource:0}: Error finding container 9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a: Status 404 returned error can't find the container with id 9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a Jan 26 21:17:00 crc kubenswrapper[4899]: I0126 21:17:00.938958 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5cd91f-692c-44ef-84db-f1b54333c162" path="/var/lib/kubelet/pods/4b5cd91f-692c-44ef-84db-f1b54333c162/volumes" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.040484 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" event={"ID":"a0034dce-d282-42f0-9cde-db3b4df6fe00","Type":"ContainerStarted","Data":"57c890e0b20b53bfa4030a0e7538ebfe9d7be9b74e610ce103dc42d5a2822a99"} Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.040541 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" event={"ID":"a0034dce-d282-42f0-9cde-db3b4df6fe00","Type":"ContainerStarted","Data":"9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a"} Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.042724 4899 generic.go:334] "Generic (PLEG): container finished" podID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerID="9ef31c56943278d5b6503fcebb848fa29141c457422b0439b16c986a37f1546d" exitCode=0 Jan 26 21:17:01 crc kubenswrapper[4899]: 
I0126 21:17:01.042760 4899 generic.go:334] "Generic (PLEG): container finished" podID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerID="ac30f6e3a619205e81d4b8c3947ca65d90480fc8d89d731c097dfcd44d727524" exitCode=1 Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.042816 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerDied","Data":"9ef31c56943278d5b6503fcebb848fa29141c457422b0439b16c986a37f1546d"} Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.042855 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerDied","Data":"ac30f6e3a619205e81d4b8c3947ca65d90480fc8d89d731c097dfcd44d727524"} Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.044807 4899 generic.go:334] "Generic (PLEG): container finished" podID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerID="6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470" exitCode=143 Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.044878 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerDied","Data":"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470"} Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.052083 4899 generic.go:334] "Generic (PLEG): container finished" podID="6254479c-5ce9-4293-a79d-bd58887b2797" containerID="4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b" exitCode=0 Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.052115 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerDied","Data":"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b"} Jan 26 21:17:01 crc 
kubenswrapper[4899]: I0126 21:17:01.055914 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" podStartSLOduration=2.055863965 podStartE2EDuration="2.055863965s" podCreationTimestamp="2026-01-26 21:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:01.05356438 +0000 UTC m=+1310.435152417" watchObservedRunningTime="2026-01-26 21:17:01.055863965 +0000 UTC m=+1310.437452002" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.596275 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725386 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725465 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725510 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725536 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725589 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725664 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725731 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725732 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.725813 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp5fq\" (UniqueName: \"kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq\") pod \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\" (UID: \"a9a87185-1adc-4de2-8bd7-8eaac51ec303\") " Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.726190 4899 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.726501 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.731101 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts" (OuterVolumeSpecName: "scripts") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.743787 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq" (OuterVolumeSpecName: "kube-api-access-mp5fq") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "kube-api-access-mp5fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.745747 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph" (OuterVolumeSpecName: "ceph") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.745782 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.775657 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.810836 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data" (OuterVolumeSpecName: "config-data") pod "a9a87185-1adc-4de2-8bd7-8eaac51ec303" (UID: "a9a87185-1adc-4de2-8bd7-8eaac51ec303"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827262 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp5fq\" (UniqueName: \"kubernetes.io/projected/a9a87185-1adc-4de2-8bd7-8eaac51ec303-kube-api-access-mp5fq\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827301 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827317 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827330 4899 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827343 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827356 4899 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9a87185-1adc-4de2-8bd7-8eaac51ec303-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:01 crc kubenswrapper[4899]: I0126 21:17:01.827368 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9a87185-1adc-4de2-8bd7-8eaac51ec303-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.065734 4899 generic.go:334] "Generic 
(PLEG): container finished" podID="a0034dce-d282-42f0-9cde-db3b4df6fe00" containerID="57c890e0b20b53bfa4030a0e7538ebfe9d7be9b74e610ce103dc42d5a2822a99" exitCode=0 Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.065814 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" event={"ID":"a0034dce-d282-42f0-9cde-db3b4df6fe00","Type":"ContainerDied","Data":"57c890e0b20b53bfa4030a0e7538ebfe9d7be9b74e610ce103dc42d5a2822a99"} Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.067803 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"a9a87185-1adc-4de2-8bd7-8eaac51ec303","Type":"ContainerDied","Data":"0f4b11d1092f9a14aea571f14a049d0858d761c9fe32d6372e712248e43ea4c7"} Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.067874 4899 scope.go:117] "RemoveContainer" containerID="9ef31c56943278d5b6503fcebb848fa29141c457422b0439b16c986a37f1546d" Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.068080 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.085996 4899 scope.go:117] "RemoveContainer" containerID="ac30f6e3a619205e81d4b8c3947ca65d90480fc8d89d731c097dfcd44d727524" Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.106617 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.112215 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:17:02 crc kubenswrapper[4899]: I0126 21:17:02.939879 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" path="/var/lib/kubelet/pods/a9a87185-1adc-4de2-8bd7-8eaac51ec303/volumes" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.357702 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.541298 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.553408 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts\") pod \"a0034dce-d282-42f0-9cde-db3b4df6fe00\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.553454 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgqxs\" (UniqueName: \"kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs\") pod \"a0034dce-d282-42f0-9cde-db3b4df6fe00\" (UID: \"a0034dce-d282-42f0-9cde-db3b4df6fe00\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.555142 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0034dce-d282-42f0-9cde-db3b4df6fe00" (UID: "a0034dce-d282-42f0-9cde-db3b4df6fe00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.559009 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs" (OuterVolumeSpecName: "kube-api-access-lgqxs") pod "a0034dce-d282-42f0-9cde-db3b4df6fe00" (UID: "a0034dce-d282-42f0-9cde-db3b4df6fe00"). InnerVolumeSpecName "kube-api-access-lgqxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655178 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655246 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655279 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655356 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655365 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655397 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655440 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvw84\" (UniqueName: \"kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655485 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655527 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655577 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle\") pod \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\" (UID: \"f7812d9a-1081-4b52-9a7f-da420cf3aab9\") " Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655907 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0034dce-d282-42f0-9cde-db3b4df6fe00-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655956 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgqxs\" (UniqueName: \"kubernetes.io/projected/a0034dce-d282-42f0-9cde-db3b4df6fe00-kube-api-access-lgqxs\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.655971 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7812d9a-1081-4b52-9a7f-da420cf3aab9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.656550 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs" (OuterVolumeSpecName: "logs") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.658749 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.659029 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84" (OuterVolumeSpecName: "kube-api-access-bvw84") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "kube-api-access-bvw84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.660014 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts" (OuterVolumeSpecName: "scripts") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.675754 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.683527 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data" (OuterVolumeSpecName: "config-data") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.686072 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.687759 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7812d9a-1081-4b52-9a7f-da420cf3aab9" (UID: "f7812d9a-1081-4b52-9a7f-da420cf3aab9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.757613 4899 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.757881 4899 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.757958 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.758025 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.758086 4899 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7812d9a-1081-4b52-9a7f-da420cf3aab9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.758168 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvw84\" (UniqueName: 
\"kubernetes.io/projected/f7812d9a-1081-4b52-9a7f-da420cf3aab9-kube-api-access-bvw84\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.758233 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:03 crc kubenswrapper[4899]: I0126 21:17:03.758295 4899 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7812d9a-1081-4b52-9a7f-da420cf3aab9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.085964 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.085974 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila6447-account-delete-5h7gh" event={"ID":"a0034dce-d282-42f0-9cde-db3b4df6fe00","Type":"ContainerDied","Data":"9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a"} Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.086036 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2b296843beb4ff1846f53bb5254c4e949526d9506eb4a14e391f40c708134a" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.088094 4899 generic.go:334] "Generic (PLEG): container finished" podID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerID="df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d" exitCode=0 Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.088168 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerDied","Data":"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d"} Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 
21:17:04.088305 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"f7812d9a-1081-4b52-9a7f-da420cf3aab9","Type":"ContainerDied","Data":"a66d7e1259560828901ffdf3811903196485f2292d14c8042a9e59bf84609a9d"} Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.088372 4899 scope.go:117] "RemoveContainer" containerID="df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.088225 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.121384 4899 scope.go:117] "RemoveContainer" containerID="6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.137856 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.144659 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.155995 4899 scope.go:117] "RemoveContainer" containerID="df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.156532 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d\": container with ID starting with df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d not found: ID does not exist" containerID="df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.156573 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d"} err="failed 
to get container status \"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d\": rpc error: code = NotFound desc = could not find container \"df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d\": container with ID starting with df814cb9d2d39da5fb7262005fa5054b91692563a8282fa711f296f96b01464d not found: ID does not exist" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.156605 4899 scope.go:117] "RemoveContainer" containerID="6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.156827 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470\": container with ID starting with 6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470 not found: ID does not exist" containerID="6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.156847 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470"} err="failed to get container status \"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470\": rpc error: code = NotFound desc = could not find container \"6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470\": container with ID starting with 6c36f65c8eb70fadee4fd683baca19b9de42259977369d975d150906756d6470 not found: ID does not exist" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.827802 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.898132 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-db-create-tdsg2"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.905354 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-db-create-tdsg2"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.909713 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-6447-account-create-update-8fknt"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.915855 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila6447-account-delete-5h7gh"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.922918 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-6447-account-create-update-8fknt"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.928573 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila6447-account-delete-5h7gh"] Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.939783 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a996441-754f-4281-91cb-92d4a14f9cb3" path="/var/lib/kubelet/pods/0a996441-754f-4281-91cb-92d4a14f9cb3/volumes" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.940457 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0034dce-d282-42f0-9cde-db3b4df6fe00" path="/var/lib/kubelet/pods/a0034dce-d282-42f0-9cde-db3b4df6fe00/volumes" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.941012 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2a8fab-05f9-4f6d-adeb-184819d687d9" path="/var/lib/kubelet/pods/df2a8fab-05f9-4f6d-adeb-184819d687d9/volumes" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.942038 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" path="/var/lib/kubelet/pods/f7812d9a-1081-4b52-9a7f-da420cf3aab9/volumes" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.980156 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.980252 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.980316 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.980384 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfkjb\" (UniqueName: \"kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.981123 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.981214 4899 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts\") pod \"6254479c-5ce9-4293-a79d-bd58887b2797\" (UID: \"6254479c-5ce9-4293-a79d-bd58887b2797\") " Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.981280 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.981566 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6254479c-5ce9-4293-a79d-bd58887b2797-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.984894 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb" (OuterVolumeSpecName: "kube-api-access-lfkjb") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "kube-api-access-lfkjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.985517 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts" (OuterVolumeSpecName: "scripts") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995343 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-create-hnspt"] Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995663 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api-log" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995676 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api-log" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995689 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="manila-share" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995697 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="manila-share" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995706 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0034dce-d282-42f0-9cde-db3b4df6fe00" containerName="mariadb-account-delete" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995715 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0034dce-d282-42f0-9cde-db3b4df6fe00" containerName="mariadb-account-delete" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995722 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995728 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995739 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="manila-scheduler" 
Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995745 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="manila-scheduler" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995758 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995763 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api" Jan 26 21:17:04 crc kubenswrapper[4899]: E0126 21:17:04.995782 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995788 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995936 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="manila-scheduler" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995952 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="manila-share" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995961 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a87185-1adc-4de2-8bd7-8eaac51ec303" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995969 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995978 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0034dce-d282-42f0-9cde-db3b4df6fe00" containerName="mariadb-account-delete" Jan 26 21:17:04 crc 
kubenswrapper[4899]: I0126 21:17:04.995987 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" containerName="probe" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.995994 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7812d9a-1081-4b52-9a7f-da420cf3aab9" containerName="manila-api-log" Jan 26 21:17:04 crc kubenswrapper[4899]: I0126 21:17:04.996480 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.005281 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-create-hnspt"] Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.050237 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.067000 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.082817 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts\") pod \"manila-db-create-hnspt\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.082982 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tm9\" (UniqueName: \"kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9\") pod \"manila-db-create-hnspt\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.083045 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.083066 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.083080 4899 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.083091 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfkjb\" (UniqueName: \"kubernetes.io/projected/6254479c-5ce9-4293-a79d-bd58887b2797-kube-api-access-lfkjb\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:05 crc 
kubenswrapper[4899]: I0126 21:17:05.117449 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-4115-account-create-update-kj9qp"] Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.119088 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121155 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-db-secret" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121539 4899 generic.go:334] "Generic (PLEG): container finished" podID="6254479c-5ce9-4293-a79d-bd58887b2797" containerID="bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65" exitCode=0 Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121582 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerDied","Data":"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65"} Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121611 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"6254479c-5ce9-4293-a79d-bd58887b2797","Type":"ContainerDied","Data":"d59d19556e182ceda91568445c22cc43e7e843dcf57e786121c88a1a5f31c5f7"} Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121630 4899 scope.go:117] "RemoveContainer" containerID="4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.121743 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.122027 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data" (OuterVolumeSpecName: "config-data") pod "6254479c-5ce9-4293-a79d-bd58887b2797" (UID: "6254479c-5ce9-4293-a79d-bd58887b2797"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.136959 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-4115-account-create-update-kj9qp"] Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.141147 4899 scope.go:117] "RemoveContainer" containerID="bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.157799 4899 scope.go:117] "RemoveContainer" containerID="4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b" Jan 26 21:17:05 crc kubenswrapper[4899]: E0126 21:17:05.158291 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b\": container with ID starting with 4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b not found: ID does not exist" containerID="4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.158338 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b"} err="failed to get container status \"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b\": rpc error: code = NotFound desc = could not find container \"4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b\": container with ID starting 
with 4bd68d2b33b899a537e9f043d810b5b61e9169a063ae82353487214efbf9ea4b not found: ID does not exist" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.158369 4899 scope.go:117] "RemoveContainer" containerID="bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65" Jan 26 21:17:05 crc kubenswrapper[4899]: E0126 21:17:05.158673 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65\": container with ID starting with bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65 not found: ID does not exist" containerID="bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.158705 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65"} err="failed to get container status \"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65\": rpc error: code = NotFound desc = could not find container \"bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65\": container with ID starting with bd4d7f12bb849947ce471c29f46a3305769eee37c53fa88159a09206d6be1a65 not found: ID does not exist" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184091 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrbr\" (UniqueName: \"kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184137 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts\") pod \"manila-db-create-hnspt\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184185 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184269 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tm9\" (UniqueName: \"kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9\") pod \"manila-db-create-hnspt\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184318 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6254479c-5ce9-4293-a79d-bd58887b2797-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.184993 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts\") pod \"manila-db-create-hnspt\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.200337 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tm9\" (UniqueName: \"kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9\") pod \"manila-db-create-hnspt\" (UID: 
\"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.285636 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrbr\" (UniqueName: \"kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.285727 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.286468 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.303714 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrbr\" (UniqueName: \"kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr\") pod \"manila-4115-account-create-update-kj9qp\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.384346 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.438430 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.501236 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.506308 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.810739 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-create-hnspt"] Jan 26 21:17:05 crc kubenswrapper[4899]: W0126 21:17:05.813092 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07ac1e30_81c7_4490_b3fc_31ca412bc46b.slice/crio-3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d WatchSource:0}: Error finding container 3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d: Status 404 returned error can't find the container with id 3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d Jan 26 21:17:05 crc kubenswrapper[4899]: I0126 21:17:05.880769 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-4115-account-create-update-kj9qp"] Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.131548 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" event={"ID":"225d74b4-6432-4f35-bd14-abb53a5d2c46","Type":"ContainerStarted","Data":"593bb6987ce9a00b2ed9419845f9cc492b60d858f9fc2e53d8e595a7bfad7f6a"} Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.132633 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" event={"ID":"225d74b4-6432-4f35-bd14-abb53a5d2c46","Type":"ContainerStarted","Data":"eb16dfc8e33cbd228835b65fc499272c27e07d11abc1361c7793ad25f6820305"} Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.133299 4899 generic.go:334] "Generic (PLEG): container finished" podID="07ac1e30-81c7-4490-b3fc-31ca412bc46b" containerID="09e5df032b970d5f2796f97efa127b8697b61628dd66fe2585414d0578e97cde" exitCode=0 Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.133341 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-hnspt" event={"ID":"07ac1e30-81c7-4490-b3fc-31ca412bc46b","Type":"ContainerDied","Data":"09e5df032b970d5f2796f97efa127b8697b61628dd66fe2585414d0578e97cde"} Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.133389 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-hnspt" event={"ID":"07ac1e30-81c7-4490-b3fc-31ca412bc46b","Type":"ContainerStarted","Data":"3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d"} Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.149879 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" podStartSLOduration=1.149853483 podStartE2EDuration="1.149853483s" podCreationTimestamp="2026-01-26 21:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:06.146560189 +0000 UTC m=+1315.528148226" watchObservedRunningTime="2026-01-26 21:17:06.149853483 +0000 UTC m=+1315.531441520" Jan 26 21:17:06 crc kubenswrapper[4899]: I0126 21:17:06.938795 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6254479c-5ce9-4293-a79d-bd58887b2797" path="/var/lib/kubelet/pods/6254479c-5ce9-4293-a79d-bd58887b2797/volumes" Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 
21:17:07.144145 4899 generic.go:334] "Generic (PLEG): container finished" podID="225d74b4-6432-4f35-bd14-abb53a5d2c46" containerID="593bb6987ce9a00b2ed9419845f9cc492b60d858f9fc2e53d8e595a7bfad7f6a" exitCode=0 Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.144213 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" event={"ID":"225d74b4-6432-4f35-bd14-abb53a5d2c46","Type":"ContainerDied","Data":"593bb6987ce9a00b2ed9419845f9cc492b60d858f9fc2e53d8e595a7bfad7f6a"} Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.465681 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.651090 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tm9\" (UniqueName: \"kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9\") pod \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.651282 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts\") pod \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\" (UID: \"07ac1e30-81c7-4490-b3fc-31ca412bc46b\") " Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.652010 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07ac1e30-81c7-4490-b3fc-31ca412bc46b" (UID: "07ac1e30-81c7-4490-b3fc-31ca412bc46b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.658102 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9" (OuterVolumeSpecName: "kube-api-access-45tm9") pod "07ac1e30-81c7-4490-b3fc-31ca412bc46b" (UID: "07ac1e30-81c7-4490-b3fc-31ca412bc46b"). InnerVolumeSpecName "kube-api-access-45tm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.753175 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tm9\" (UniqueName: \"kubernetes.io/projected/07ac1e30-81c7-4490-b3fc-31ca412bc46b-kube-api-access-45tm9\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:07 crc kubenswrapper[4899]: I0126 21:17:07.753556 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ac1e30-81c7-4490-b3fc-31ca412bc46b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.156538 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-create-hnspt" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.156543 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-create-hnspt" event={"ID":"07ac1e30-81c7-4490-b3fc-31ca412bc46b","Type":"ContainerDied","Data":"3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d"} Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.156607 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a58c81b7af67b69fd54f0c884a64ffce15eb5bf5dfab4dc18410e4504dadd5d" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.489905 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.665453 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts\") pod \"225d74b4-6432-4f35-bd14-abb53a5d2c46\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.665515 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txrbr\" (UniqueName: \"kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr\") pod \"225d74b4-6432-4f35-bd14-abb53a5d2c46\" (UID: \"225d74b4-6432-4f35-bd14-abb53a5d2c46\") " Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.666505 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "225d74b4-6432-4f35-bd14-abb53a5d2c46" (UID: "225d74b4-6432-4f35-bd14-abb53a5d2c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.671247 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr" (OuterVolumeSpecName: "kube-api-access-txrbr") pod "225d74b4-6432-4f35-bd14-abb53a5d2c46" (UID: "225d74b4-6432-4f35-bd14-abb53a5d2c46"). InnerVolumeSpecName "kube-api-access-txrbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.767116 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/225d74b4-6432-4f35-bd14-abb53a5d2c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:08 crc kubenswrapper[4899]: I0126 21:17:08.767693 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txrbr\" (UniqueName: \"kubernetes.io/projected/225d74b4-6432-4f35-bd14-abb53a5d2c46-kube-api-access-txrbr\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:09 crc kubenswrapper[4899]: I0126 21:17:09.165402 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" event={"ID":"225d74b4-6432-4f35-bd14-abb53a5d2c46","Type":"ContainerDied","Data":"eb16dfc8e33cbd228835b65fc499272c27e07d11abc1361c7793ad25f6820305"} Jan 26 21:17:09 crc kubenswrapper[4899]: I0126 21:17:09.165450 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb16dfc8e33cbd228835b65fc499272c27e07d11abc1361c7793ad25f6820305" Jan 26 21:17:09 crc kubenswrapper[4899]: I0126 21:17:09.165455 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-4115-account-create-update-kj9qp" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.342536 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-db-sync-h5bsm"] Jan 26 21:17:10 crc kubenswrapper[4899]: E0126 21:17:10.343234 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="225d74b4-6432-4f35-bd14-abb53a5d2c46" containerName="mariadb-account-create-update" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.343251 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="225d74b4-6432-4f35-bd14-abb53a5d2c46" containerName="mariadb-account-create-update" Jan 26 21:17:10 crc kubenswrapper[4899]: E0126 21:17:10.343276 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ac1e30-81c7-4490-b3fc-31ca412bc46b" containerName="mariadb-database-create" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.343284 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ac1e30-81c7-4490-b3fc-31ca412bc46b" containerName="mariadb-database-create" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.343439 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="225d74b4-6432-4f35-bd14-abb53a5d2c46" containerName="mariadb-account-create-update" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.343456 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ac1e30-81c7-4490-b3fc-31ca412bc46b" containerName="mariadb-database-create" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.344007 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.346254 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.346392 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-bbwjx" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.361482 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-h5bsm"] Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.491750 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.491802 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjds8\" (UniqueName: \"kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.491828 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.593324 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.593379 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjds8\" (UniqueName: \"kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.593403 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.598650 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.598787 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data\") pod \"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.615496 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjds8\" (UniqueName: \"kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8\") pod 
\"manila-db-sync-h5bsm\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:10 crc kubenswrapper[4899]: I0126 21:17:10.660650 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:11 crc kubenswrapper[4899]: I0126 21:17:11.112627 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-db-sync-h5bsm"] Jan 26 21:17:11 crc kubenswrapper[4899]: I0126 21:17:11.181700 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-h5bsm" event={"ID":"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea","Type":"ContainerStarted","Data":"a83ef70377e609e6d9ee65caafc405c435ad5d4ea36643d5ba8ee1de8c80aba0"} Jan 26 21:17:12 crc kubenswrapper[4899]: I0126 21:17:12.194993 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-h5bsm" event={"ID":"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea","Type":"ContainerStarted","Data":"aeb53d649d1a3c83fc69fef47171a4125505527c9b41b5aaa51f7ffb156ca8ec"} Jan 26 21:17:12 crc kubenswrapper[4899]: I0126 21:17:12.209182 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-db-sync-h5bsm" podStartSLOduration=2.209162966 podStartE2EDuration="2.209162966s" podCreationTimestamp="2026-01-26 21:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:12.208454106 +0000 UTC m=+1321.590042173" watchObservedRunningTime="2026-01-26 21:17:12.209162966 +0000 UTC m=+1321.590751003" Jan 26 21:17:14 crc kubenswrapper[4899]: I0126 21:17:14.216307 4899 generic.go:334] "Generic (PLEG): container finished" podID="ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" containerID="aeb53d649d1a3c83fc69fef47171a4125505527c9b41b5aaa51f7ffb156ca8ec" exitCode=0 Jan 26 21:17:14 crc kubenswrapper[4899]: I0126 21:17:14.216381 4899 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-h5bsm" event={"ID":"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea","Type":"ContainerDied","Data":"aeb53d649d1a3c83fc69fef47171a4125505527c9b41b5aaa51f7ffb156ca8ec"} Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.542254 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.673293 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data\") pod \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.673353 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjds8\" (UniqueName: \"kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8\") pod \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.673525 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data\") pod \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\" (UID: \"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea\") " Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.679256 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" (UID: "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.679995 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8" (OuterVolumeSpecName: "kube-api-access-fjds8") pod "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" (UID: "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea"). InnerVolumeSpecName "kube-api-access-fjds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.681712 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data" (OuterVolumeSpecName: "config-data") pod "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" (UID: "ec2fc933-663d-4ec0-8e51-ea69c8c4ecea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.774741 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.774790 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjds8\" (UniqueName: \"kubernetes.io/projected/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-kube-api-access-fjds8\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:15 crc kubenswrapper[4899]: I0126 21:17:15.774803 4899 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.231142 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-db-sync-h5bsm" 
event={"ID":"ec2fc933-663d-4ec0-8e51-ea69c8c4ecea","Type":"ContainerDied","Data":"a83ef70377e609e6d9ee65caafc405c435ad5d4ea36643d5ba8ee1de8c80aba0"} Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.231193 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83ef70377e609e6d9ee65caafc405c435ad5d4ea36643d5ba8ee1de8c80aba0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.231265 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-db-sync-h5bsm" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.560828 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: E0126 21:17:16.561381 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" containerName="manila-db-sync" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.561394 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" containerName="manila-db-sync" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.561534 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" containerName="manila-db-sync" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.562190 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.565385 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scripts" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.568902 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-manila-dockercfg-bbwjx" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.569032 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-config-data" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.569145 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-scheduler-config-data" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.579757 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.631120 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.632609 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.634373 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"ceph-conf-files" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.634576 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-share-share0-config-data" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.639474 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.640903 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.645415 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-api-config-data" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.650954 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.666141 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.687257 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.687294 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.687337 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.687392 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ktq\" (UniqueName: 
\"kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.687428 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.788906 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.788993 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789029 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789072 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9fk\" (UniqueName: 
\"kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789096 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789122 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp8pf\" (UniqueName: \"kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789148 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789172 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ktq\" (UniqueName: \"kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789195 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789233 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789255 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789293 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789320 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789357 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom\") pod \"manila-scheduler-0\" 
(UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789388 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789421 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789445 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.789481 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.790358 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " 
pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.795658 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.795740 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.811508 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ktq\" (UniqueName: \"kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.813898 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") " pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.884370 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891125 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891162 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891195 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891217 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891239 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 
21:17:16.891268 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9fk\" (UniqueName: \"kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891281 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891302 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp8pf\" (UniqueName: \"kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891324 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891357 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891403 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891431 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891445 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891761 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891805 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.891973 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:16 crc 
kubenswrapper[4899]: I0126 21:17:16.892140 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.895309 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.895943 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.896094 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:16 crc kubenswrapper[4899]: I0126 21:17:16.896364 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.170061 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.170361 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.170987 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.172853 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp8pf\" (UniqueName: \"kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf\") pod \"manila-api-0\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.188915 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9fk\" (UniqueName: \"kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk\") pod \"manila-share-share0-0\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.278414 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.308081 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.415080 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.677645 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:17:17 crc kubenswrapper[4899]: W0126 21:17:17.680863 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0afcea16_5821_4243_b580_89ca3cf9945b.slice/crio-a2d1bd144f646cdf67a76683d7d88300b9880e9f88b9b0ba7bba10d8f81def2a WatchSource:0}: Error finding container a2d1bd144f646cdf67a76683d7d88300b9880e9f88b9b0ba7bba10d8f81def2a: Status 404 returned error can't find the container with id a2d1bd144f646cdf67a76683d7d88300b9880e9f88b9b0ba7bba10d8f81def2a Jan 26 21:17:17 crc kubenswrapper[4899]: I0126 21:17:17.736721 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:17:17 crc kubenswrapper[4899]: W0126 21:17:17.775451 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefce6933_44f2_4bef_8b6f_921cf0a31371.slice/crio-da268ea07988a13ed0a6080b7b9339b0ba0a24fc02ff47c1cb2cb3dde9c9c540 WatchSource:0}: Error finding container da268ea07988a13ed0a6080b7b9339b0ba0a24fc02ff47c1cb2cb3dde9c9c540: Status 404 returned error can't find the container with id da268ea07988a13ed0a6080b7b9339b0ba0a24fc02ff47c1cb2cb3dde9c9c540 Jan 26 21:17:18 crc kubenswrapper[4899]: I0126 21:17:18.275515 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerStarted","Data":"9dbda24391c61ea950c9586c692dc77f8ca4049c1f142fa13d3d1a96f429e135"} Jan 26 21:17:18 crc kubenswrapper[4899]: I0126 
21:17:18.275848 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerStarted","Data":"c0355419e35073dbc33152f1c860a5d546603786929f5f210f61034e74de73fb"} Jan 26 21:17:18 crc kubenswrapper[4899]: I0126 21:17:18.276785 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerStarted","Data":"a2d1bd144f646cdf67a76683d7d88300b9880e9f88b9b0ba7bba10d8f81def2a"} Jan 26 21:17:18 crc kubenswrapper[4899]: I0126 21:17:18.279612 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerStarted","Data":"da268ea07988a13ed0a6080b7b9339b0ba0a24fc02ff47c1cb2cb3dde9c9c540"} Jan 26 21:17:19 crc kubenswrapper[4899]: I0126 21:17:19.287155 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerStarted","Data":"d3d5a087dc23e610f5e245e2202ab08dbdbc8a6347bb82a3153138a73fd06275"} Jan 26 21:17:19 crc kubenswrapper[4899]: I0126 21:17:19.288582 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerStarted","Data":"431f2a58d805ecac94be4a17744c7699fd791628da79d4b922c464716fd5e926"} Jan 26 21:17:19 crc kubenswrapper[4899]: I0126 21:17:19.297709 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerStarted","Data":"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c"} Jan 26 21:17:19 crc kubenswrapper[4899]: I0126 21:17:19.313531 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="manila-kuttl-tests/manila-scheduler-0" podStartSLOduration=3.313511673 podStartE2EDuration="3.313511673s" podCreationTimestamp="2026-01-26 21:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:19.308139639 +0000 UTC m=+1328.689727676" watchObservedRunningTime="2026-01-26 21:17:19.313511673 +0000 UTC m=+1328.695099710" Jan 26 21:17:20 crc kubenswrapper[4899]: I0126 21:17:20.307700 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerStarted","Data":"0acac2cca47264dc2432d8f4c515a2b55ded2e13ac9dffb1cc4bd7bbf9a6583b"} Jan 26 21:17:20 crc kubenswrapper[4899]: I0126 21:17:20.310368 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerStarted","Data":"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a"} Jan 26 21:17:20 crc kubenswrapper[4899]: I0126 21:17:20.310558 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:20 crc kubenswrapper[4899]: I0126 21:17:20.327784 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-share-share0-0" podStartSLOduration=4.3277628870000004 podStartE2EDuration="4.327762887s" podCreationTimestamp="2026-01-26 21:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:20.325387229 +0000 UTC m=+1329.706975276" watchObservedRunningTime="2026-01-26 21:17:20.327762887 +0000 UTC m=+1329.709350924" Jan 26 21:17:20 crc kubenswrapper[4899]: I0126 21:17:20.346314 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-api-0" 
podStartSLOduration=4.346293486 podStartE2EDuration="4.346293486s" podCreationTimestamp="2026-01-26 21:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:20.344753502 +0000 UTC m=+1329.726341549" watchObservedRunningTime="2026-01-26 21:17:20.346293486 +0000 UTC m=+1329.727881533" Jan 26 21:17:26 crc kubenswrapper[4899]: I0126 21:17:26.884860 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:27 crc kubenswrapper[4899]: I0126 21:17:27.278959 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:30 crc kubenswrapper[4899]: I0126 21:17:30.109242 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:17:30 crc kubenswrapper[4899]: I0126 21:17:30.109600 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:17:38 crc kubenswrapper[4899]: I0126 21:17:38.724510 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:17:38 crc kubenswrapper[4899]: I0126 21:17:38.732973 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:17:39 crc kubenswrapper[4899]: I0126 21:17:39.147507 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.880343 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"] Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.881402 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.883344 4899 reflector.go:368] Caches populated for *v1.Secret from object-"manila-kuttl-tests"/"manila-share-share1-config-data" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.901398 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"] Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944230 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944529 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944590 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944628 4899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944734 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmm7g\" (UniqueName: \"kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944765 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:40 crc kubenswrapper[4899]: I0126 21:17:40.944829 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046251 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046324 
4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046342 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmm7g\" (UniqueName: \"kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046365 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046385 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046457 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046482 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.046512 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.047001 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.055039 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.055279 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.055353 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.055623 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.062976 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmm7g\" (UniqueName: \"kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g\") pod \"manila-share-share1-0\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") " pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.198015 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:17:41 crc kubenswrapper[4899]: I0126 21:17:41.675595 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"] Jan 26 21:17:42 crc kubenswrapper[4899]: I0126 21:17:42.485858 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerStarted","Data":"90008c23df15d5d6c5bcbecb9604bb019ba06ea76b39455279a2009090e5eb3f"} Jan 26 21:17:42 crc kubenswrapper[4899]: I0126 21:17:42.486203 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerStarted","Data":"aa3d23a9c5a21be00a83b968d13973dc83704ac3c34cf9e3b925232f155a082b"} Jan 26 21:17:44 crc kubenswrapper[4899]: I0126 21:17:44.518007 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" 
event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerStarted","Data":"5554cf85689e139057cc7e5b49836dda92bb9f76ff247c07daa036f33d38f51c"} Jan 26 21:17:44 crc kubenswrapper[4899]: I0126 21:17:44.546995 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-share-share1-0" podStartSLOduration=4.546977218 podStartE2EDuration="4.546977218s" podCreationTimestamp="2026-01-26 21:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:17:44.544673292 +0000 UTC m=+1353.926261319" watchObservedRunningTime="2026-01-26 21:17:44.546977218 +0000 UTC m=+1353.928565255" Jan 26 21:17:51 crc kubenswrapper[4899]: I0126 21:17:51.199082 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.108873 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.109606 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.109660 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.110384 4899 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.110473 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8" gracePeriod=600 Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.639734 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8" exitCode=0 Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.639783 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8"} Jan 26 21:18:00 crc kubenswrapper[4899]: I0126 21:18:00.639827 4899 scope.go:117] "RemoveContainer" containerID="4c6a068c1dcea571cec247005b623b6639c13ba7d6fb0ff472c9f5743612c521" Jan 26 21:18:01 crc kubenswrapper[4899]: I0126 21:18:01.654482 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"} Jan 26 21:18:03 crc kubenswrapper[4899]: I0126 21:18:03.236107 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="manila-kuttl-tests/manila-share-share1-0" Jan 26 
21:18:03 crc kubenswrapper[4899]: I0126 21:18:03.886722 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:18:03 crc kubenswrapper[4899]: I0126 21:18:03.887333 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="manila-share" containerID="cri-o://d3d5a087dc23e610f5e245e2202ab08dbdbc8a6347bb82a3153138a73fd06275" gracePeriod=30 Jan 26 21:18:03 crc kubenswrapper[4899]: I0126 21:18:03.887408 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share0-0" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="probe" containerID="cri-o://0acac2cca47264dc2432d8f4c515a2b55ded2e13ac9dffb1cc4bd7bbf9a6583b" gracePeriod=30 Jan 26 21:18:04 crc kubenswrapper[4899]: I0126 21:18:04.678149 4899 generic.go:334] "Generic (PLEG): container finished" podID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerID="0acac2cca47264dc2432d8f4c515a2b55ded2e13ac9dffb1cc4bd7bbf9a6583b" exitCode=0 Jan 26 21:18:04 crc kubenswrapper[4899]: I0126 21:18:04.678187 4899 generic.go:334] "Generic (PLEG): container finished" podID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerID="d3d5a087dc23e610f5e245e2202ab08dbdbc8a6347bb82a3153138a73fd06275" exitCode=1 Jan 26 21:18:04 crc kubenswrapper[4899]: I0126 21:18:04.678214 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerDied","Data":"0acac2cca47264dc2432d8f4c515a2b55ded2e13ac9dffb1cc4bd7bbf9a6583b"} Jan 26 21:18:04 crc kubenswrapper[4899]: I0126 21:18:04.678267 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" 
event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerDied","Data":"d3d5a087dc23e610f5e245e2202ab08dbdbc8a6347bb82a3153138a73fd06275"} Jan 26 21:18:04 crc kubenswrapper[4899]: I0126 21:18:04.891032 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078582 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078692 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml9fk\" (UniqueName: \"kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078725 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078753 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078786 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078846 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078888 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph\") pod \"efce6933-44f2-4bef-8b6f-921cf0a31371\" (UID: \"efce6933-44f2-4bef-8b6f-921cf0a31371\") " Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.078880 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.079127 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.079195 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.084464 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph" (OuterVolumeSpecName: "ceph") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.085037 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.090081 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts" (OuterVolumeSpecName: "scripts") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.095680 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk" (OuterVolumeSpecName: "kube-api-access-ml9fk") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "kube-api-access-ml9fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.142981 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data" (OuterVolumeSpecName: "config-data") pod "efce6933-44f2-4bef-8b6f-921cf0a31371" (UID: "efce6933-44f2-4bef-8b6f-921cf0a31371"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180174 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml9fk\" (UniqueName: \"kubernetes.io/projected/efce6933-44f2-4bef-8b6f-921cf0a31371-kube-api-access-ml9fk\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180219 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180232 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180246 4899 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/efce6933-44f2-4bef-8b6f-921cf0a31371-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180256 4899 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.180264 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/efce6933-44f2-4bef-8b6f-921cf0a31371-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.687296 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share0-0" event={"ID":"efce6933-44f2-4bef-8b6f-921cf0a31371","Type":"ContainerDied","Data":"da268ea07988a13ed0a6080b7b9339b0ba0a24fc02ff47c1cb2cb3dde9c9c540"} Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.687419 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share0-0" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.687685 4899 scope.go:117] "RemoveContainer" containerID="0acac2cca47264dc2432d8f4c515a2b55ded2e13ac9dffb1cc4bd7bbf9a6583b" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.712499 4899 scope.go:117] "RemoveContainer" containerID="d3d5a087dc23e610f5e245e2202ab08dbdbc8a6347bb82a3153138a73fd06275" Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.735728 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:18:05 crc kubenswrapper[4899]: I0126 21:18:05.745880 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-share-share0-0"] Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.385932 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:06 crc kubenswrapper[4899]: E0126 21:18:06.386349 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="probe" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.386372 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="probe" Jan 26 21:18:06 crc kubenswrapper[4899]: E0126 21:18:06.386394 4899 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="manila-share" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.386413 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="manila-share" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.386592 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="manila-share" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.386612 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" containerName="probe" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.387219 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.401236 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.405113 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.405185 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.405236 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw87l\" (UniqueName: \"kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.505694 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.505744 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.505793 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw87l\" (UniqueName: \"kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.511972 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " 
pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.512161 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.525575 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw87l\" (UniqueName: \"kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l\") pod \"manila-service-cleanup-n5b5h655-9kt7c\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.705041 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.902966 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:06 crc kubenswrapper[4899]: I0126 21:18:06.942870 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efce6933-44f2-4bef-8b6f-921cf0a31371" path="/var/lib/kubelet/pods/efce6933-44f2-4bef-8b6f-921cf0a31371/volumes" Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.017227 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.033129 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-db-sync-h5bsm"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.046553 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["manila-kuttl-tests/manila-db-sync-h5bsm"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.052528 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.052829 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="manila-scheduler" containerID="cri-o://9dbda24391c61ea950c9586c692dc77f8ca4049c1f142fa13d3d1a96f429e135" gracePeriod=30 Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.053024 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-scheduler-0" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="probe" containerID="cri-o://431f2a58d805ecac94be4a17744c7699fd791628da79d4b922c464716fd5e926" gracePeriod=30 Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.072092 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/manila4115-account-delete-dw6wv"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.072982 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv" Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.092490 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila4115-account-delete-dw6wv"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.118068 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.118319 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share1-0" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="manila-share" containerID="cri-o://90008c23df15d5d6c5bcbecb9604bb019ba06ea76b39455279a2009090e5eb3f" gracePeriod=30 Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.118443 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-share-share1-0" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="probe" containerID="cri-o://5554cf85689e139057cc7e5b49836dda92bb9f76ff247c07daa036f33d38f51c" gracePeriod=30 Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.165058 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.165288 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api-log" containerID="cri-o://a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c" gracePeriod=30 Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.165652 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-api-0" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api" containerID="cri-o://eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a" 
gracePeriod=30
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.216405 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52l8\" (UniqueName: \"kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.216590 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.317142 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.317191 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52l8\" (UniqueName: \"kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.318287 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.339944 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52l8\" (UniqueName: \"kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8\") pod \"manila4115-account-delete-dw6wv\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") " pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.398381 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.705853 4899 generic.go:334] "Generic (PLEG): container finished" podID="0afcea16-5821-4243-b580-89ca3cf9945b" containerID="a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c" exitCode=143
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.705964 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerDied","Data":"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c"}
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.708656 4899 generic.go:334] "Generic (PLEG): container finished" podID="8aeeeb25-090e-413f-b317-9b41061148c8" containerID="5554cf85689e139057cc7e5b49836dda92bb9f76ff247c07daa036f33d38f51c" exitCode=0
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.708685 4899 generic.go:334] "Generic (PLEG): container finished" podID="8aeeeb25-090e-413f-b317-9b41061148c8" containerID="90008c23df15d5d6c5bcbecb9604bb019ba06ea76b39455279a2009090e5eb3f" exitCode=1
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.708739 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerDied","Data":"5554cf85689e139057cc7e5b49836dda92bb9f76ff247c07daa036f33d38f51c"}
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.708770 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerDied","Data":"90008c23df15d5d6c5bcbecb9604bb019ba06ea76b39455279a2009090e5eb3f"}
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.711241 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" event={"ID":"fdfa7325-0ae2-44cb-9523-21010e9af015","Type":"ContainerStarted","Data":"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e"}
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.711283 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" event={"ID":"fdfa7325-0ae2-44cb-9523-21010e9af015","Type":"ContainerStarted","Data":"6d3792446af040c848bd3b5815b6ec95eb9c53918810d47af30900ad168cdc27"}
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.711355 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" podUID="fdfa7325-0ae2-44cb-9523-21010e9af015" containerName="manila-service-cleanup-n5b5h655" containerID="cri-o://df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e" gracePeriod=30
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.850194 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" podStartSLOduration=1.850169865 podStartE2EDuration="1.850169865s" podCreationTimestamp="2026-01-26 21:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:18:07.73986661 +0000 UTC m=+1377.121454647" watchObservedRunningTime="2026-01-26 21:18:07.850169865 +0000 UTC m=+1377.231757902"
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.858881 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/manila4115-account-delete-dw6wv"]
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.993144 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"]
Jan 26 21:18:07 crc kubenswrapper[4899]: I0126 21:18:07.993647 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" podUID="7b93a53e-a97b-4250-9524-332e5b65e329" containerName="manager" containerID="cri-o://77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0" gracePeriod=10
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.126859 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share1-0"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230481 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230574 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230608 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230638 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230684 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230795 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmm7g\" (UniqueName: \"kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.230815 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila\") pod \"8aeeeb25-090e-413f-b317-9b41061148c8\" (UID: \"8aeeeb25-090e-413f-b317-9b41061148c8\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.231290 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.232227 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.239164 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts" (OuterVolumeSpecName: "scripts") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.239341 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g" (OuterVolumeSpecName: "kube-api-access-xmm7g") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "kube-api-access-xmm7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.239455 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.242541 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph" (OuterVolumeSpecName: "ceph") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.281613 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.281846 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/manila-operator-index-c9zzr" podUID="39906e7d-94ba-4997-8e46-27d2f18888c9" containerName="registry-server" containerID="cri-o://8d253ac8ec077dd0ff5ae0a4f62b9d9eb1d6356b0db70b336fb80a5bb9036b72" gracePeriod=30
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335319 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335348 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335357 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335366 4899 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335377 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmm7g\" (UniqueName: \"kubernetes.io/projected/8aeeeb25-090e-413f-b317-9b41061148c8-kube-api-access-xmm7g\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.335386 4899 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/8aeeeb25-090e-413f-b317-9b41061148c8-var-lib-manila\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.348797 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.366120 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9b50cddac8314eade23b12d2e6208bc2d605a3a7e050d524e1e459fac6mmlg6"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.378116 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data" (OuterVolumeSpecName: "config-data") pod "8aeeeb25-090e-413f-b317-9b41061148c8" (UID: "8aeeeb25-090e-413f-b317-9b41061148c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.439009 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aeeeb25-090e-413f-b317-9b41061148c8-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.488827 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.654092 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert\") pod \"7b93a53e-a97b-4250-9524-332e5b65e329\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.654238 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert\") pod \"7b93a53e-a97b-4250-9524-332e5b65e329\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.654310 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gggdd\" (UniqueName: \"kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd\") pod \"7b93a53e-a97b-4250-9524-332e5b65e329\" (UID: \"7b93a53e-a97b-4250-9524-332e5b65e329\") "
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.666267 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7b93a53e-a97b-4250-9524-332e5b65e329" (UID: "7b93a53e-a97b-4250-9524-332e5b65e329"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.666358 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd" (OuterVolumeSpecName: "kube-api-access-gggdd") pod "7b93a53e-a97b-4250-9524-332e5b65e329" (UID: "7b93a53e-a97b-4250-9524-332e5b65e329"). InnerVolumeSpecName "kube-api-access-gggdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.676142 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "7b93a53e-a97b-4250-9524-332e5b65e329" (UID: "7b93a53e-a97b-4250-9524-332e5b65e329"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.722622 4899 generic.go:334] "Generic (PLEG): container finished" podID="c8e08810-31e8-4b29-9835-fba465d64f68" containerID="5c80a218156fd6314d1f4311caf7ea413a9c662fc8cbaf703796cfe62aabc545" exitCode=0
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.722700 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv" event={"ID":"c8e08810-31e8-4b29-9835-fba465d64f68","Type":"ContainerDied","Data":"5c80a218156fd6314d1f4311caf7ea413a9c662fc8cbaf703796cfe62aabc545"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.722733 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv" event={"ID":"c8e08810-31e8-4b29-9835-fba465d64f68","Type":"ContainerStarted","Data":"a0e0717bcae53b155feb09d198b3edbdc9e617642b44a578325ce466f05c7236"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.724677 4899 generic.go:334] "Generic (PLEG): container finished" podID="7b93a53e-a97b-4250-9524-332e5b65e329" containerID="77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0" exitCode=0
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.724733 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" event={"ID":"7b93a53e-a97b-4250-9524-332e5b65e329","Type":"ContainerDied","Data":"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.724759 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75" event={"ID":"7b93a53e-a97b-4250-9524-332e5b65e329","Type":"ContainerDied","Data":"686c1b300b2d47323fce6cbdc389d97fe4299adeaad01a30002f407fd715f0c7"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.724789 4899 scope.go:117] "RemoveContainer" containerID="77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.724919 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.733490 4899 generic.go:334] "Generic (PLEG): container finished" podID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerID="431f2a58d805ecac94be4a17744c7699fd791628da79d4b922c464716fd5e926" exitCode=0
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.733598 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerDied","Data":"431f2a58d805ecac94be4a17744c7699fd791628da79d4b922c464716fd5e926"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.752942 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-share-share1-0" event={"ID":"8aeeeb25-090e-413f-b317-9b41061148c8","Type":"ContainerDied","Data":"aa3d23a9c5a21be00a83b968d13973dc83704ac3c34cf9e3b925232f155a082b"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.753107 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-share-share1-0"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.755797 4899 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.755833 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gggdd\" (UniqueName: \"kubernetes.io/projected/7b93a53e-a97b-4250-9524-332e5b65e329-kube-api-access-gggdd\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.755846 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b93a53e-a97b-4250-9524-332e5b65e329-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.760962 4899 scope.go:117] "RemoveContainer" containerID="77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"
Jan 26 21:18:08 crc kubenswrapper[4899]: E0126 21:18:08.765872 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0\": container with ID starting with 77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0 not found: ID does not exist" containerID="77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.766199 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0"} err="failed to get container status \"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0\": rpc error: code = NotFound desc = could not find container \"77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0\": container with ID starting with 77f51a3ebebbf67f87e25a1c7ec0dba5a3656a9387cea8bc5d6b73adee6fcde0 not found: ID does not exist"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.766402 4899 scope.go:117] "RemoveContainer" containerID="5554cf85689e139057cc7e5b49836dda92bb9f76ff247c07daa036f33d38f51c"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.767656 4899 generic.go:334] "Generic (PLEG): container finished" podID="39906e7d-94ba-4997-8e46-27d2f18888c9" containerID="8d253ac8ec077dd0ff5ae0a4f62b9d9eb1d6356b0db70b336fb80a5bb9036b72" exitCode=0
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.767707 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-c9zzr" event={"ID":"39906e7d-94ba-4997-8e46-27d2f18888c9","Type":"ContainerDied","Data":"8d253ac8ec077dd0ff5ae0a4f62b9d9eb1d6356b0db70b336fb80a5bb9036b72"}
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.867773 4899 scope.go:117] "RemoveContainer" containerID="90008c23df15d5d6c5bcbecb9604bb019ba06ea76b39455279a2009090e5eb3f"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.878640 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-c9zzr"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.889604 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.897073 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-share-share1-0"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.907379 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.913077 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/manila-operator-controller-manager-66974747b8-6bs75"]
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.942179 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431633bb-098b-4392-908c-d844fc2a9557" path="/var/lib/kubelet/pods/431633bb-098b-4392-908c-d844fc2a9557/volumes"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.942922 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b93a53e-a97b-4250-9524-332e5b65e329" path="/var/lib/kubelet/pods/7b93a53e-a97b-4250-9524-332e5b65e329/volumes"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.943529 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" path="/var/lib/kubelet/pods/8aeeeb25-090e-413f-b317-9b41061148c8/volumes"
Jan 26 21:18:08 crc kubenswrapper[4899]: I0126 21:18:08.944704 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2fc933-663d-4ec0-8e51-ea69c8c4ecea" path="/var/lib/kubelet/pods/ec2fc933-663d-4ec0-8e51-ea69c8c4ecea/volumes"
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.071221 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcj4g\" (UniqueName: \"kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g\") pod \"39906e7d-94ba-4997-8e46-27d2f18888c9\" (UID: \"39906e7d-94ba-4997-8e46-27d2f18888c9\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.075096 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g" (OuterVolumeSpecName: "kube-api-access-vcj4g") pod "39906e7d-94ba-4997-8e46-27d2f18888c9" (UID: "39906e7d-94ba-4997-8e46-27d2f18888c9"). InnerVolumeSpecName "kube-api-access-vcj4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.172288 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcj4g\" (UniqueName: \"kubernetes.io/projected/39906e7d-94ba-4997-8e46-27d2f18888c9-kube-api-access-vcj4g\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.780646 4899 generic.go:334] "Generic (PLEG): container finished" podID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerID="9dbda24391c61ea950c9586c692dc77f8ca4049c1f142fa13d3d1a96f429e135" exitCode=0
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.780735 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerDied","Data":"9dbda24391c61ea950c9586c692dc77f8ca4049c1f142fa13d3d1a96f429e135"}
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.783934 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-index-c9zzr" event={"ID":"39906e7d-94ba-4997-8e46-27d2f18888c9","Type":"ContainerDied","Data":"cec5004d3b5ee58ca1b3b2885d24bf46474b03efb7161899abd71a6432a7b546"}
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.783980 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-index-c9zzr"
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.783992 4899 scope.go:117] "RemoveContainer" containerID="8d253ac8ec077dd0ff5ae0a4f62b9d9eb1d6356b0db70b336fb80a5bb9036b72"
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.831586 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0"
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.845013 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"]
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.851059 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/manila-operator-index-c9zzr"]
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.981834 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts\") pod \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.982359 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data\") pod \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.982865 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ktq\" (UniqueName: \"kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq\") pod \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.982944 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom\") pod \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.982983 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id\") pod \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\" (UID: \"8e9769e3-7fe3-4643-8ee9-c5557476b5cd\") "
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.983557 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e9769e3-7fe3-4643-8ee9-c5557476b5cd" (UID: "8e9769e3-7fe3-4643-8ee9-c5557476b5cd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.994682 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts" (OuterVolumeSpecName: "scripts") pod "8e9769e3-7fe3-4643-8ee9-c5557476b5cd" (UID: "8e9769e3-7fe3-4643-8ee9-c5557476b5cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.994843 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e9769e3-7fe3-4643-8ee9-c5557476b5cd" (UID: "8e9769e3-7fe3-4643-8ee9-c5557476b5cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:09 crc kubenswrapper[4899]: I0126 21:18:09.997647 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq" (OuterVolumeSpecName: "kube-api-access-m6ktq") pod "8e9769e3-7fe3-4643-8ee9-c5557476b5cd" (UID: "8e9769e3-7fe3-4643-8ee9-c5557476b5cd"). InnerVolumeSpecName "kube-api-access-m6ktq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.028451 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv"
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.066742 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data" (OuterVolumeSpecName: "config-data") pod "8e9769e3-7fe3-4643-8ee9-c5557476b5cd" (UID: "8e9769e3-7fe3-4643-8ee9-c5557476b5cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.084259 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.084289 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.084301 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6ktq\" (UniqueName: \"kubernetes.io/projected/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-kube-api-access-m6ktq\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.084312 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.084322 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e9769e3-7fe3-4643-8ee9-c5557476b5cd-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.185512 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l52l8\" (UniqueName: \"kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8\") pod \"c8e08810-31e8-4b29-9835-fba465d64f68\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.185991 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts\") pod \"c8e08810-31e8-4b29-9835-fba465d64f68\" (UID: \"c8e08810-31e8-4b29-9835-fba465d64f68\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.186540 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8e08810-31e8-4b29-9835-fba465d64f68" (UID: "c8e08810-31e8-4b29-9835-fba465d64f68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.191231 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8" (OuterVolumeSpecName: "kube-api-access-l52l8") pod "c8e08810-31e8-4b29-9835-fba465d64f68" (UID: "c8e08810-31e8-4b29-9835-fba465d64f68"). InnerVolumeSpecName "kube-api-access-l52l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.287318 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e08810-31e8-4b29-9835-fba465d64f68-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.287638 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l52l8\" (UniqueName: \"kubernetes.io/projected/c8e08810-31e8-4b29-9835-fba465d64f68-kube-api-access-l52l8\") on node \"crc\" DevicePath \"\""
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.312759 4899 prober.go:107] "Probe failed" probeType="Readiness" pod="manila-kuttl-tests/manila-api-0" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api" probeResult="failure" output="Get \"http://10.217.0.111:8786/healthcheck\": read tcp 10.217.0.2:54244->10.217.0.111:8786: read: connection reset by peer"
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.646683 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0"
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792308 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp8pf\" (UniqueName: \"kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792574 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792615 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792651 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792688 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") "
Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.792752 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs\") pod \"0afcea16-5821-4243-b580-89ca3cf9945b\" (UID: \"0afcea16-5821-4243-b580-89ca3cf9945b\") " Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.793750 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs" (OuterVolumeSpecName: "logs") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.795156 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.798704 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.806732 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts" (OuterVolumeSpecName: "scripts") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.810972 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf" (OuterVolumeSpecName: "kube-api-access-cp8pf") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "kube-api-access-cp8pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.813454 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv" event={"ID":"c8e08810-31e8-4b29-9835-fba465d64f68","Type":"ContainerDied","Data":"a0e0717bcae53b155feb09d198b3edbdc9e617642b44a578325ce466f05c7236"} Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.813492 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila4115-account-delete-dw6wv" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.813502 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e0717bcae53b155feb09d198b3edbdc9e617642b44a578325ce466f05c7236" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.815369 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-scheduler-0" event={"ID":"8e9769e3-7fe3-4643-8ee9-c5557476b5cd","Type":"ContainerDied","Data":"c0355419e35073dbc33152f1c860a5d546603786929f5f210f61034e74de73fb"} Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.815414 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-scheduler-0" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.815422 4899 scope.go:117] "RemoveContainer" containerID="431f2a58d805ecac94be4a17744c7699fd791628da79d4b922c464716fd5e926" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.819359 4899 generic.go:334] "Generic (PLEG): container finished" podID="0afcea16-5821-4243-b580-89ca3cf9945b" containerID="eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a" exitCode=0 Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.819401 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerDied","Data":"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a"} Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.819459 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-api-0" event={"ID":"0afcea16-5821-4243-b580-89ca3cf9945b","Type":"ContainerDied","Data":"a2d1bd144f646cdf67a76683d7d88300b9880e9f88b9b0ba7bba10d8f81def2a"} Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.819534 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/manila-api-0" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.832874 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data" (OuterVolumeSpecName: "config-data") pod "0afcea16-5821-4243-b580-89ca3cf9945b" (UID: "0afcea16-5821-4243-b580-89ca3cf9945b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.872848 4899 scope.go:117] "RemoveContainer" containerID="9dbda24391c61ea950c9586c692dc77f8ca4049c1f142fa13d3d1a96f429e135" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.874711 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.880937 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-scheduler-0"] Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.893173 4899 scope.go:117] "RemoveContainer" containerID="eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894096 4899 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afcea16-5821-4243-b580-89ca3cf9945b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894123 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894134 4899 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afcea16-5821-4243-b580-89ca3cf9945b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894143 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp8pf\" (UniqueName: \"kubernetes.io/projected/0afcea16-5821-4243-b580-89ca3cf9945b-kube-api-access-cp8pf\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894155 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.894163 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afcea16-5821-4243-b580-89ca3cf9945b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.910051 4899 scope.go:117] "RemoveContainer" containerID="a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.923930 4899 scope.go:117] "RemoveContainer" containerID="eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a" Jan 26 21:18:10 crc kubenswrapper[4899]: E0126 21:18:10.924465 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a\": container with ID starting with eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a not found: ID does not exist" containerID="eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.924511 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a"} err="failed to get container status \"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a\": rpc error: code = NotFound desc = could not find container \"eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a\": container with ID starting with eb1803883bcb98218eb9ce1230f655c7e605d88a66932a62eb73a5cf6385934a not found: ID does not exist" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.924542 4899 scope.go:117] "RemoveContainer" containerID="a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c" Jan 26 21:18:10 crc 
kubenswrapper[4899]: E0126 21:18:10.924896 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c\": container with ID starting with a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c not found: ID does not exist" containerID="a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.924919 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c"} err="failed to get container status \"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c\": rpc error: code = NotFound desc = could not find container \"a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c\": container with ID starting with a3171e9f1dac38d1f698ffcd71582ed7055ab8d2dcd3af89f473ae4c4a671c4c not found: ID does not exist" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.939625 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39906e7d-94ba-4997-8e46-27d2f18888c9" path="/var/lib/kubelet/pods/39906e7d-94ba-4997-8e46-27d2f18888c9/volumes" Jan 26 21:18:10 crc kubenswrapper[4899]: I0126 21:18:10.940426 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" path="/var/lib/kubelet/pods/8e9769e3-7fe3-4643-8ee9-c5557476b5cd/volumes" Jan 26 21:18:11 crc kubenswrapper[4899]: I0126 21:18:11.144102 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:18:11 crc kubenswrapper[4899]: I0126 21:18:11.150250 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-api-0"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.135124 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["manila-kuttl-tests/manila-db-create-hnspt"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.140931 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila4115-account-delete-dw6wv"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.146313 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-4115-account-create-update-kj9qp"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.152984 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila4115-account-delete-dw6wv"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.159355 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-4115-account-create-update-kj9qp"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.165419 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-db-create-hnspt"] Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.938550 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ac1e30-81c7-4490-b3fc-31ca412bc46b" path="/var/lib/kubelet/pods/07ac1e30-81c7-4490-b3fc-31ca412bc46b/volumes" Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.939278 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" path="/var/lib/kubelet/pods/0afcea16-5821-4243-b580-89ca3cf9945b/volumes" Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.939872 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="225d74b4-6432-4f35-bd14-abb53a5d2c46" path="/var/lib/kubelet/pods/225d74b4-6432-4f35-bd14-abb53a5d2c46/volumes" Jan 26 21:18:12 crc kubenswrapper[4899]: I0126 21:18:12.940826 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e08810-31e8-4b29-9835-fba465d64f68" path="/var/lib/kubelet/pods/c8e08810-31e8-4b29-9835-fba465d64f68/volumes" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 
21:18:16.195688 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-db-sync-2dc4c"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.202245 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-bootstrap-6jzcd"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.211985 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone-db-sync-2dc4c"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.224726 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone-bootstrap-6jzcd"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.232552 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.233067 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" podUID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" containerName="keystone-api" containerID="cri-o://a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122" gracePeriod=30 Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.258765 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["manila-kuttl-tests/keystone57b7-account-delete-fpxnr"] Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259115 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93a53e-a97b-4250-9524-332e5b65e329" containerName="manager" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259141 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93a53e-a97b-4250-9524-332e5b65e329" containerName="manager" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259153 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="manila-share" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259161 4899 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="manila-share" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259179 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39906e7d-94ba-4997-8e46-27d2f18888c9" containerName="registry-server" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259186 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="39906e7d-94ba-4997-8e46-27d2f18888c9" containerName="registry-server" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259201 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259208 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259220 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e08810-31e8-4b29-9835-fba465d64f68" containerName="mariadb-account-delete" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259230 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e08810-31e8-4b29-9835-fba465d64f68" containerName="mariadb-account-delete" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259249 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259256 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259268 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="manila-scheduler" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259277 4899 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="manila-scheduler" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259289 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259296 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api" Jan 26 21:18:16 crc kubenswrapper[4899]: E0126 21:18:16.259308 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api-log" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259315 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api-log" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259447 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e08810-31e8-4b29-9835-fba465d64f68" containerName="mariadb-account-delete" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259464 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="39906e7d-94ba-4997-8e46-27d2f18888c9" containerName="registry-server" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259475 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="manila-scheduler" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259484 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9769e3-7fe3-4643-8ee9-c5557476b5cd" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259491 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api-log" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259499 4899 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0afcea16-5821-4243-b580-89ca3cf9945b" containerName="manila-api" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259507 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="manila-share" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259520 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93a53e-a97b-4250-9524-332e5b65e329" containerName="manager" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.259528 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeeeb25-090e-413f-b317-9b41061148c8" containerName="probe" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.260090 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.270081 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone57b7-account-delete-fpxnr"] Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.282422 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqqc\" (UniqueName: \"kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.282505 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.383498 4899 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqqc\" (UniqueName: \"kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.383568 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.384821 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.403646 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqqc\" (UniqueName: \"kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc\") pod \"keystone57b7-account-delete-fpxnr\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.575206 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.944427 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab177f9-aee8-4921-b60d-c085a99964f4" path="/var/lib/kubelet/pods/7ab177f9-aee8-4921-b60d-c085a99964f4/volumes" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.945604 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8596bee1-b6cc-499d-b944-7e6732399d9b" path="/var/lib/kubelet/pods/8596bee1-b6cc-499d-b944-7e6732399d9b/volumes" Jan 26 21:18:16 crc kubenswrapper[4899]: I0126 21:18:16.997436 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/keystone57b7-account-delete-fpxnr"] Jan 26 21:18:17 crc kubenswrapper[4899]: I0126 21:18:17.885420 4899 generic.go:334] "Generic (PLEG): container finished" podID="cfc45ce4-b190-43f2-ad5a-d738caf6f033" containerID="d60060cb472e5ff0e4493f6cc8c54c7547ef9044e0027d22ea81cdc5847425a4" exitCode=0 Jan 26 21:18:17 crc kubenswrapper[4899]: I0126 21:18:17.885617 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" event={"ID":"cfc45ce4-b190-43f2-ad5a-d738caf6f033","Type":"ContainerDied","Data":"d60060cb472e5ff0e4493f6cc8c54c7547ef9044e0027d22ea81cdc5847425a4"} Jan 26 21:18:17 crc kubenswrapper[4899]: I0126 21:18:17.885750 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" event={"ID":"cfc45ce4-b190-43f2-ad5a-d738caf6f033","Type":"ContainerStarted","Data":"c50f614291c7c7a9e27b32bef6e5e36c053bc025545714a7177c50a84386f8a1"} Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.039943 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/root-account-create-update-vgsv6"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.048501 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["manila-kuttl-tests/root-account-create-update-vgsv6"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.076974 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.085987 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.099477 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.225363 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/openstack-galera-2" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" containerID="cri-o://eff235495821bd9d63f429bbdc2eb73fe6c6be35c98946da1487dc70ad4f6b43" gracePeriod=30 Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.707385 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.707639 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/memcached-0" podUID="ea09b8ff-8868-45dc-92e5-bdee96d13107" containerName="memcached" containerID="cri-o://ecc13133795a9325f993bb309727c2c044412f3de64419facdc72dbd7f2cd736" gracePeriod=30 Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.897695 4899 generic.go:334] "Generic (PLEG): container finished" podID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerID="eff235495821bd9d63f429bbdc2eb73fe6c6be35c98946da1487dc70ad4f6b43" exitCode=0 Jan 26 21:18:18 crc kubenswrapper[4899]: I0126 21:18:18.897843 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerDied","Data":"eff235495821bd9d63f429bbdc2eb73fe6c6be35c98946da1487dc70ad4f6b43"} Jan 26 21:18:18 crc 
kubenswrapper[4899]: I0126 21:18:18.940342 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1bb4284-a142-421b-b41c-46c3b31995fa" path="/var/lib/kubelet/pods/e1bb4284-a142-421b-b41c-46c3b31995fa/volumes" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.137109 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.192201 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.312077 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.325262 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.325597 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjwm\" (UniqueName: \"kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.325756 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.325946 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.326192 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.326335 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts\") pod \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\" (UID: \"9d25306a-7534-45dc-a752-efdb1bb3c2f8\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.326553 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.326679 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.326833 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.327454 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.327905 4899 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.327959 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.327971 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d25306a-7534-45dc-a752-efdb1bb3c2f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.327981 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/9d25306a-7534-45dc-a752-efdb1bb3c2f8-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.330983 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm" (OuterVolumeSpecName: "kube-api-access-jfjwm") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "kube-api-access-jfjwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.336983 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "9d25306a-7534-45dc-a752-efdb1bb3c2f8" (UID: "9d25306a-7534-45dc-a752-efdb1bb3c2f8"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.429527 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts\") pod \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.429625 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cqqc\" (UniqueName: \"kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc\") pod \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\" (UID: \"cfc45ce4-b190-43f2-ad5a-d738caf6f033\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.429999 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"cfc45ce4-b190-43f2-ad5a-d738caf6f033" (UID: "cfc45ce4-b190-43f2-ad5a-d738caf6f033"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.430039 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjwm\" (UniqueName: \"kubernetes.io/projected/9d25306a-7534-45dc-a752-efdb1bb3c2f8-kube-api-access-jfjwm\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.430078 4899 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.432350 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc" (OuterVolumeSpecName: "kube-api-access-5cqqc") pod "cfc45ce4-b190-43f2-ad5a-d738caf6f033" (UID: "cfc45ce4-b190-43f2-ad5a-d738caf6f033"). InnerVolumeSpecName "kube-api-access-5cqqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.443676 4899 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.531397 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc45ce4-b190-43f2-ad5a-d738caf6f033-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.531427 4899 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.531438 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cqqc\" (UniqueName: \"kubernetes.io/projected/cfc45ce4-b190-43f2-ad5a-d738caf6f033-kube-api-access-5cqqc\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.551607 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.701914 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.835491 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys\") pod \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.835861 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys\") pod \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.835974 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz65\" (UniqueName: \"kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65\") pod \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.835998 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data\") pod \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.836044 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts\") pod \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\" (UID: \"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0\") " Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.838779 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" (UID: "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.838817 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts" (OuterVolumeSpecName: "scripts") pod "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" (UID: "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.839095 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" (UID: "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.847520 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65" (OuterVolumeSpecName: "kube-api-access-4qz65") pod "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" (UID: "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0"). InnerVolumeSpecName "kube-api-access-4qz65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.854600 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data" (OuterVolumeSpecName: "config-data") pod "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" (UID: "9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.907490 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-2" event={"ID":"9d25306a-7534-45dc-a752-efdb1bb3c2f8","Type":"ContainerDied","Data":"7de1b1491fa7bc5de8f94f26207c77597c52e54b7cf30c65de794a6ef163db52"} Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.907537 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-2" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.907574 4899 scope.go:117] "RemoveContainer" containerID="eff235495821bd9d63f429bbdc2eb73fe6c6be35c98946da1487dc70ad4f6b43" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.913090 4899 generic.go:334] "Generic (PLEG): container finished" podID="ea09b8ff-8868-45dc-92e5-bdee96d13107" containerID="ecc13133795a9325f993bb309727c2c044412f3de64419facdc72dbd7f2cd736" exitCode=0 Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.913153 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/memcached-0" event={"ID":"ea09b8ff-8868-45dc-92e5-bdee96d13107","Type":"ContainerDied","Data":"ecc13133795a9325f993bb309727c2c044412f3de64419facdc72dbd7f2cd736"} Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.914497 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" event={"ID":"cfc45ce4-b190-43f2-ad5a-d738caf6f033","Type":"ContainerDied","Data":"c50f614291c7c7a9e27b32bef6e5e36c053bc025545714a7177c50a84386f8a1"} Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.914519 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50f614291c7c7a9e27b32bef6e5e36c053bc025545714a7177c50a84386f8a1" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.914574 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone57b7-account-delete-fpxnr" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.917495 4899 generic.go:334] "Generic (PLEG): container finished" podID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" containerID="a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122" exitCode=0 Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.917641 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" event={"ID":"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0","Type":"ContainerDied","Data":"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122"} Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.917669 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" event={"ID":"9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0","Type":"ContainerDied","Data":"b5bd7731dfa36bf9e861ca1d74a4d004d9b18a847cee638d8746195ba7c0d1a5"} Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.927926 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/keystone-59fbff8547-2xlqq" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.937672 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qz65\" (UniqueName: \"kubernetes.io/projected/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-kube-api-access-4qz65\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.937713 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.937727 4899 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.937739 4899 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.937751 4899 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.956298 4899 scope.go:117] "RemoveContainer" containerID="4d947da3de5046ba1caeeaad9180a7340663554ea8080e04e480cb312536cd4f" Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.970990 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.975649 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/openstack-galera-2"] Jan 26 21:18:19 crc kubenswrapper[4899]: I0126 21:18:19.997627 4899 scope.go:117] "RemoveContainer" 
containerID="a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.006086 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/rabbitmq-server-0" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="rabbitmq" containerID="cri-o://025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643" gracePeriod=604800 Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.006234 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.010164 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone-59fbff8547-2xlqq"] Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.021193 4899 scope.go:117] "RemoveContainer" containerID="a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122" Jan 26 21:18:20 crc kubenswrapper[4899]: E0126 21:18:20.021665 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122\": container with ID starting with a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122 not found: ID does not exist" containerID="a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.021702 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122"} err="failed to get container status \"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122\": rpc error: code = NotFound desc = could not find container \"a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122\": container with ID starting with a4db775aa21655e096c95ca87b1f655c0fe7856652d8edbe1ccddbea0d6a8122 not found: ID 
does not exist" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.125122 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/memcached-0" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.208344 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/ceph"] Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.208576 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/ceph" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" containerName="ceph" containerID="cri-o://f2260c1878f0f80c6406c66bf8626f4036e6bab59943aaea1d5243720753b490" gracePeriod=30 Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.241099 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config\") pod \"ea09b8ff-8868-45dc-92e5-bdee96d13107\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.241168 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data\") pod \"ea09b8ff-8868-45dc-92e5-bdee96d13107\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.241203 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msltd\" (UniqueName: \"kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd\") pod \"ea09b8ff-8868-45dc-92e5-bdee96d13107\" (UID: \"ea09b8ff-8868-45dc-92e5-bdee96d13107\") " Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.242315 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data" (OuterVolumeSpecName: 
"config-data") pod "ea09b8ff-8868-45dc-92e5-bdee96d13107" (UID: "ea09b8ff-8868-45dc-92e5-bdee96d13107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.242340 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ea09b8ff-8868-45dc-92e5-bdee96d13107" (UID: "ea09b8ff-8868-45dc-92e5-bdee96d13107"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.246672 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd" (OuterVolumeSpecName: "kube-api-access-msltd") pod "ea09b8ff-8868-45dc-92e5-bdee96d13107" (UID: "ea09b8ff-8868-45dc-92e5-bdee96d13107"). InnerVolumeSpecName "kube-api-access-msltd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.256722 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/openstack-galera-1" podUID="93293cee-6c86-4865-8a19-b43659a851f3" containerName="galera" containerID="cri-o://21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b" gracePeriod=28 Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.343218 4899 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.343828 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea09b8ff-8868-45dc-92e5-bdee96d13107-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.343920 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msltd\" (UniqueName: \"kubernetes.io/projected/ea09b8ff-8868-45dc-92e5-bdee96d13107-kube-api-access-msltd\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.927494 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/memcached-0" event={"ID":"ea09b8ff-8868-45dc-92e5-bdee96d13107","Type":"ContainerDied","Data":"5c2dfd519c081688820e0660e593165baf67c6997e536c056661613f15851205"} Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.927556 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/memcached-0" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.927565 4899 scope.go:117] "RemoveContainer" containerID="ecc13133795a9325f993bb309727c2c044412f3de64419facdc72dbd7f2cd736" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.939121 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" path="/var/lib/kubelet/pods/9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0/volumes" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.940234 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" path="/var/lib/kubelet/pods/9d25306a-7534-45dc-a752-efdb1bb3c2f8/volumes" Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.968312 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:18:20 crc kubenswrapper[4899]: I0126 21:18:20.973443 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/memcached-0"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.273408 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-db-create-pmsq9"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.277593 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone-db-create-pmsq9"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.288443 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.295723 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/keystone57b7-account-delete-fpxnr"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.303674 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone-57b7-account-create-update-sgpdt"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 
21:18:21.308763 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/keystone57b7-account-delete-fpxnr"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.538447 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664314 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664376 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664472 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664519 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664547 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd\") pod 
\"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664581 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664844 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.664896 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwqlv\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv\") pod \"4b49daa9-f343-4c81-88d5-ded2e08582aa\" (UID: \"4b49daa9-f343-4c81-88d5-ded2e08582aa\") " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.665321 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.667761 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.666982 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.670591 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.672038 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv" (OuterVolumeSpecName: "kube-api-access-jwqlv") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "kube-api-access-jwqlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.672166 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info" (OuterVolumeSpecName: "pod-info") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.686314 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2" (OuterVolumeSpecName: "persistence") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.733204 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4b49daa9-f343-4c81-88d5-ded2e08582aa" (UID: "4b49daa9-f343-4c81-88d5-ded2e08582aa"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766163 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwqlv\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-kube-api-access-jwqlv\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766201 4899 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b49daa9-f343-4c81-88d5-ded2e08582aa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766214 4899 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b49daa9-f343-4c81-88d5-ded2e08582aa-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766225 4899 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4b49daa9-f343-4c81-88d5-ded2e08582aa-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766235 4899 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766246 4899 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766256 4899 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b49daa9-f343-4c81-88d5-ded2e08582aa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.766302 4899 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") on node \"crc\" " Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.782096 4899 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.782264 4899 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2") on node "crc" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.867633 4899 reconciler_common.go:293] "Volume detached for volume \"pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5386375d-4d3d-4ca5-814c-2118bc693ca2\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.936879 4899 generic.go:334] "Generic (PLEG): container finished" podID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerID="025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643" exitCode=0 Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.936965 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerDied","Data":"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643"} Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.937033 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/rabbitmq-server-0" event={"ID":"4b49daa9-f343-4c81-88d5-ded2e08582aa","Type":"ContainerDied","Data":"4450cbcddea0853234b20d9596d3e9c57a9acc03033c04a9e100511d79ae9584"} Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.937059 4899 scope.go:117] "RemoveContainer" containerID="025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.937069 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/rabbitmq-server-0" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.968275 4899 scope.go:117] "RemoveContainer" containerID="a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15" Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.969961 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:18:21 crc kubenswrapper[4899]: I0126 21:18:21.978390 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/rabbitmq-server-0"] Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:21.998982 4899 scope.go:117] "RemoveContainer" containerID="025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643" Jan 26 21:18:22 crc kubenswrapper[4899]: E0126 21:18:21.999508 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643\": container with ID starting with 025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643 not found: ID does not exist" containerID="025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:21.999532 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643"} err="failed to get container status \"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643\": rpc error: code = NotFound desc = could not find container \"025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643\": container with ID starting with 025b7b31afb175c9f65e2849f79b843a111ca2f27eb2e8031a9db8ccc33df643 not found: ID does not exist" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:21.999553 4899 scope.go:117] "RemoveContainer" containerID="a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15" Jan 26 
21:18:22 crc kubenswrapper[4899]: E0126 21:18:21.999864 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15\": container with ID starting with a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15 not found: ID does not exist" containerID="a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:21.999885 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15"} err="failed to get container status \"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15\": rpc error: code = NotFound desc = could not find container \"a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15\": container with ID starting with a48469d1340f1c0375b06f14a6cd46d0511d8468c0bf91879d9ed513e1852c15 not found: ID does not exist" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.212044 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.288437 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="manila-kuttl-tests/openstack-galera-0" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="galera" containerID="cri-o://78def6462e135e1c51cf7586dd668fc67c6c03b3a74bd3086155f1b882d87166" gracePeriod=26 Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375437 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375512 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375547 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375667 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhr68\" (UniqueName: \"kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375715 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.375733 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"93293cee-6c86-4865-8a19-b43659a851f3\" (UID: \"93293cee-6c86-4865-8a19-b43659a851f3\") " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.376138 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.376158 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.376279 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.376387 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.380379 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68" (OuterVolumeSpecName: "kube-api-access-mhr68") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "kube-api-access-mhr68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.384811 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "93293cee-6c86-4865-8a19-b43659a851f3" (UID: "93293cee-6c86-4865-8a19-b43659a851f3"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477527 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477583 4899 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477593 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93293cee-6c86-4865-8a19-b43659a851f3-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477604 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhr68\" (UniqueName: \"kubernetes.io/projected/93293cee-6c86-4865-8a19-b43659a851f3-kube-api-access-mhr68\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477614 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93293cee-6c86-4865-8a19-b43659a851f3-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.477644 4899 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.489754 4899 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.579291 4899 
reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.939566 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" path="/var/lib/kubelet/pods/4b49daa9-f343-4c81-88d5-ded2e08582aa/volumes" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.940698 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553baa7a-de49-4c87-9cb2-a57838ac671a" path="/var/lib/kubelet/pods/553baa7a-de49-4c87-9cb2-a57838ac671a/volumes" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.941349 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ddc2cab-c784-48b0-9ac8-202189823ab2" path="/var/lib/kubelet/pods/7ddc2cab-c784-48b0-9ac8-202189823ab2/volumes" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.942499 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc45ce4-b190-43f2-ad5a-d738caf6f033" path="/var/lib/kubelet/pods/cfc45ce4-b190-43f2-ad5a-d738caf6f033/volumes" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.943125 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea09b8ff-8868-45dc-92e5-bdee96d13107" path="/var/lib/kubelet/pods/ea09b8ff-8868-45dc-92e5-bdee96d13107/volumes" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.954531 4899 generic.go:334] "Generic (PLEG): container finished" podID="e1149d0e-e93d-496a-9022-51fa77168394" containerID="78def6462e135e1c51cf7586dd668fc67c6c03b3a74bd3086155f1b882d87166" exitCode=0 Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.954673 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerDied","Data":"78def6462e135e1c51cf7586dd668fc67c6c03b3a74bd3086155f1b882d87166"} Jan 26 
21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.957394 4899 generic.go:334] "Generic (PLEG): container finished" podID="93293cee-6c86-4865-8a19-b43659a851f3" containerID="21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b" exitCode=0 Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.957420 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-1" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.957470 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerDied","Data":"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b"} Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.957531 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-1" event={"ID":"93293cee-6c86-4865-8a19-b43659a851f3","Type":"ContainerDied","Data":"692d96848566cec16080265b0ffe9f9e770aca53365f64b0af074ba6f31385ec"} Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.957557 4899 scope.go:117] "RemoveContainer" containerID="21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.978370 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.980983 4899 scope.go:117] "RemoveContainer" containerID="bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67" Jan 26 21:18:22 crc kubenswrapper[4899]: I0126 21:18:22.984693 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/openstack-galera-1"] Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.000440 4899 scope.go:117] "RemoveContainer" containerID="21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b" Jan 26 21:18:23 crc kubenswrapper[4899]: E0126 21:18:23.000917 4899 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b\": container with ID starting with 21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b not found: ID does not exist" containerID="21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.001001 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b"} err="failed to get container status \"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b\": rpc error: code = NotFound desc = could not find container \"21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b\": container with ID starting with 21e74311163effb47fd7a930799c5f9f926518f8e1d054fdebfd10c7b53a179b not found: ID does not exist" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.001959 4899 scope.go:117] "RemoveContainer" containerID="bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67" Jan 26 21:18:23 crc kubenswrapper[4899]: E0126 21:18:23.002326 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67\": container with ID starting with bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67 not found: ID does not exist" containerID="bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.002384 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67"} err="failed to get container status \"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67\": rpc error: code = NotFound 
desc = could not find container \"bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67\": container with ID starting with bf3d58715c052ac652ddac8306ea3709b4434b9da390c2acdc009ba362c74e67 not found: ID does not exist" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.023478 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184216 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184287 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184378 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184405 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184453 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr6rg\" (UniqueName: 
\"kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.184508 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated\") pod \"e1149d0e-e93d-496a-9022-51fa77168394\" (UID: \"e1149d0e-e93d-496a-9022-51fa77168394\") " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.185080 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.185213 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.185281 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.185333 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.190104 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg" (OuterVolumeSpecName: "kube-api-access-lr6rg") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "kube-api-access-lr6rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.192348 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "e1149d0e-e93d-496a-9022-51fa77168394" (UID: "e1149d0e-e93d-496a-9022-51fa77168394"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286605 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1149d0e-e93d-496a-9022-51fa77168394-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286705 4899 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286719 4899 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286730 4899 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286741 4899 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1149d0e-e93d-496a-9022-51fa77168394-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.286751 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr6rg\" (UniqueName: \"kubernetes.io/projected/e1149d0e-e93d-496a-9022-51fa77168394-kube-api-access-lr6rg\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.299027 4899 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.388320 4899 
reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.966821 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/openstack-galera-0" event={"ID":"e1149d0e-e93d-496a-9022-51fa77168394","Type":"ContainerDied","Data":"f7a8e9a3a8e33284fb060169a552506159f8e7215e03f71f587a6d53ce5f74ba"} Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.966856 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/openstack-galera-0" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.966880 4899 scope.go:117] "RemoveContainer" containerID="78def6462e135e1c51cf7586dd668fc67c6c03b3a74bd3086155f1b882d87166" Jan 26 21:18:23 crc kubenswrapper[4899]: I0126 21:18:23.984578 4899 scope.go:117] "RemoveContainer" containerID="5240bbe23a4b730e4418cafb12a3771a829533f3f69052df351bc927050ae35d" Jan 26 21:18:24 crc kubenswrapper[4899]: I0126 21:18:24.002994 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:18:24 crc kubenswrapper[4899]: I0126 21:18:24.007317 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/openstack-galera-0"] Jan 26 21:18:24 crc kubenswrapper[4899]: I0126 21:18:24.940840 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93293cee-6c86-4865-8a19-b43659a851f3" path="/var/lib/kubelet/pods/93293cee-6c86-4865-8a19-b43659a851f3/volumes" Jan 26 21:18:24 crc kubenswrapper[4899]: I0126 21:18:24.942454 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1149d0e-e93d-496a-9022-51fa77168394" path="/var/lib/kubelet/pods/e1149d0e-e93d-496a-9022-51fa77168394/volumes" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.066117 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.069855 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw87l\" (UniqueName: \"kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l\") pod \"fdfa7325-0ae2-44cb-9523-21010e9af015\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.070936 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data\") pod \"fdfa7325-0ae2-44cb-9523-21010e9af015\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.070982 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data\") pod \"fdfa7325-0ae2-44cb-9523-21010e9af015\" (UID: \"fdfa7325-0ae2-44cb-9523-21010e9af015\") " Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.076974 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "fdfa7325-0ae2-44cb-9523-21010e9af015" (UID: "fdfa7325-0ae2-44cb-9523-21010e9af015"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.076960 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l" (OuterVolumeSpecName: "kube-api-access-mw87l") pod "fdfa7325-0ae2-44cb-9523-21010e9af015" (UID: "fdfa7325-0ae2-44cb-9523-21010e9af015"). InnerVolumeSpecName "kube-api-access-mw87l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.084113 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data" (OuterVolumeSpecName: "config-data") pod "fdfa7325-0ae2-44cb-9523-21010e9af015" (UID: "fdfa7325-0ae2-44cb-9523-21010e9af015"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.171900 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw87l\" (UniqueName: \"kubernetes.io/projected/fdfa7325-0ae2-44cb-9523-21010e9af015-kube-api-access-mw87l\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.171955 4899 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.171976 4899 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fdfa7325-0ae2-44cb-9523-21010e9af015-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.666003 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.666182 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" event={"ID":"fdfa7325-0ae2-44cb-9523-21010e9af015","Type":"ContainerDied","Data":"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e"} Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.666270 4899 scope.go:117] "RemoveContainer" containerID="df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.670007 4899 generic.go:334] "Generic (PLEG): container finished" podID="fdfa7325-0ae2-44cb-9523-21010e9af015" containerID="df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e" exitCode=137 Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.670058 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c" event={"ID":"fdfa7325-0ae2-44cb-9523-21010e9af015","Type":"ContainerDied","Data":"6d3792446af040c848bd3b5815b6ec95eb9c53918810d47af30900ad168cdc27"} Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.691644 4899 scope.go:117] "RemoveContainer" containerID="df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e" Jan 26 21:18:38 crc kubenswrapper[4899]: E0126 21:18:38.692139 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e\": container with ID starting with df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e not found: ID does not exist" containerID="df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.692169 4899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e"} err="failed to get container status \"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e\": rpc error: code = NotFound desc = could not find container \"df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e\": container with ID starting with df21c8e473e4b7eb0f4f9e4436f59d7a9826f580b3af5ecb8e9db00c392a946e not found: ID does not exist" Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.693109 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.699825 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/manila-service-cleanup-n5b5h655-9kt7c"] Jan 26 21:18:38 crc kubenswrapper[4899]: I0126 21:18:38.944222 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfa7325-0ae2-44cb-9523-21010e9af015" path="/var/lib/kubelet/pods/fdfa7325-0ae2-44cb-9523-21010e9af015/volumes" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.766799 4899 generic.go:334] "Generic (PLEG): container finished" podID="951664be-c618-4a13-8265-32cf5a4d7cf1" containerID="f2260c1878f0f80c6406c66bf8626f4036e6bab59943aaea1d5243720753b490" exitCode=137 Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.766953 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/ceph" event={"ID":"951664be-c618-4a13-8265-32cf5a4d7cf1","Type":"ContainerDied","Data":"f2260c1878f0f80c6406c66bf8626f4036e6bab59943aaea1d5243720753b490"} Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.767385 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="manila-kuttl-tests/ceph" event={"ID":"951664be-c618-4a13-8265-32cf5a4d7cf1","Type":"ContainerDied","Data":"cd7f6fa26b8a5a6d30eb55e7eef02bbc78970f8fe4c5ac26927eb2fa291b67ee"} Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.767411 4899 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7f6fa26b8a5a6d30eb55e7eef02bbc78970f8fe4c5ac26927eb2fa291b67ee" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.801070 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="manila-kuttl-tests/ceph" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.934522 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log\") pod \"951664be-c618-4a13-8265-32cf5a4d7cf1\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.934937 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data\") pod \"951664be-c618-4a13-8265-32cf5a4d7cf1\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.934993 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hsns\" (UniqueName: \"kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns\") pod \"951664be-c618-4a13-8265-32cf5a4d7cf1\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.935031 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run\") pod \"951664be-c618-4a13-8265-32cf5a4d7cf1\" (UID: \"951664be-c618-4a13-8265-32cf5a4d7cf1\") " Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.935759 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log" (OuterVolumeSpecName: "log") pod "951664be-c618-4a13-8265-32cf5a4d7cf1" (UID: 
"951664be-c618-4a13-8265-32cf5a4d7cf1"). InnerVolumeSpecName "log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.935813 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run" (OuterVolumeSpecName: "run") pod "951664be-c618-4a13-8265-32cf5a4d7cf1" (UID: "951664be-c618-4a13-8265-32cf5a4d7cf1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.940705 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns" (OuterVolumeSpecName: "kube-api-access-4hsns") pod "951664be-c618-4a13-8265-32cf5a4d7cf1" (UID: "951664be-c618-4a13-8265-32cf5a4d7cf1"). InnerVolumeSpecName "kube-api-access-4hsns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:50 crc kubenswrapper[4899]: I0126 21:18:50.941628 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data" (OuterVolumeSpecName: "data") pod "951664be-c618-4a13-8265-32cf5a4d7cf1" (UID: "951664be-c618-4a13-8265-32cf5a4d7cf1"). InnerVolumeSpecName "data". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.036393 4899 reconciler_common.go:293] "Volume detached for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-log\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.036705 4899 reconciler_common.go:293] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-data\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.036788 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hsns\" (UniqueName: \"kubernetes.io/projected/951664be-c618-4a13-8265-32cf5a4d7cf1-kube-api-access-4hsns\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.036865 4899 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/951664be-c618-4a13-8265-32cf5a4d7cf1-run\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.774905 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="manila-kuttl-tests/ceph" Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.808181 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["manila-kuttl-tests/ceph"] Jan 26 21:18:51 crc kubenswrapper[4899]: I0126 21:18:51.813954 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["manila-kuttl-tests/ceph"] Jan 26 21:18:52 crc kubenswrapper[4899]: I0126 21:18:52.938632 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" path="/var/lib/kubelet/pods/951664be-c618-4a13-8265-32cf5a4d7cf1/volumes" Jan 26 21:18:55 crc kubenswrapper[4899]: I0126 21:18:55.926452 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:18:55 crc kubenswrapper[4899]: I0126 21:18:55.926999 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" containerName="manager" containerID="cri-o://2824018950b84b0563c475c6bb42a452da4b695e93fa0b3167aeef1a27c8b630" gracePeriod=10 Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.257279 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.257784 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-4tsbp" podUID="ba72f737-1c99-4652-b573-d3a6b5c5a191" containerName="registry-server" containerID="cri-o://b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701" gracePeriod=30 Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.272688 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc"] Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 
21:18:56.277441 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efbj8tc"] Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.627735 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.732494 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgxh\" (UniqueName: \"kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh\") pod \"ba72f737-1c99-4652-b573-d3a6b5c5a191\" (UID: \"ba72f737-1c99-4652-b573-d3a6b5c5a191\") " Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.743663 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh" (OuterVolumeSpecName: "kube-api-access-2wgxh") pod "ba72f737-1c99-4652-b573-d3a6b5c5a191" (UID: "ba72f737-1c99-4652-b573-d3a6b5c5a191"). InnerVolumeSpecName "kube-api-access-2wgxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.811467 4899 generic.go:334] "Generic (PLEG): container finished" podID="ba72f737-1c99-4652-b573-d3a6b5c5a191" containerID="b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701" exitCode=0 Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.811530 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-4tsbp" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.811527 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-4tsbp" event={"ID":"ba72f737-1c99-4652-b573-d3a6b5c5a191","Type":"ContainerDied","Data":"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701"} Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.812067 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-4tsbp" event={"ID":"ba72f737-1c99-4652-b573-d3a6b5c5a191","Type":"ContainerDied","Data":"cad2ec4b4997fe5a43fd3f878a8e0fdac74a5f7abcc81cbcadda0d0c08aa1195"} Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.812103 4899 scope.go:117] "RemoveContainer" containerID="b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.813725 4899 generic.go:334] "Generic (PLEG): container finished" podID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" containerID="2824018950b84b0563c475c6bb42a452da4b695e93fa0b3167aeef1a27c8b630" exitCode=0 Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.813768 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" event={"ID":"e0134143-cc77-4e5e-8ae8-1e431f6e32bc","Type":"ContainerDied","Data":"2824018950b84b0563c475c6bb42a452da4b695e93fa0b3167aeef1a27c8b630"} Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.815698 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.833937 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljjf\" (UniqueName: \"kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf\") pod \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.833998 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert\") pod \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.834025 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert\") pod \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\" (UID: \"e0134143-cc77-4e5e-8ae8-1e431f6e32bc\") " Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.834328 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgxh\" (UniqueName: \"kubernetes.io/projected/ba72f737-1c99-4652-b573-d3a6b5c5a191-kube-api-access-2wgxh\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.843506 4899 scope.go:117] "RemoveContainer" containerID="b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701" Jan 26 21:18:56 crc kubenswrapper[4899]: E0126 21:18:56.844122 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701\": container with ID starting with b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701 not 
found: ID does not exist" containerID="b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.844154 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701"} err="failed to get container status \"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701\": rpc error: code = NotFound desc = could not find container \"b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701\": container with ID starting with b7878d9ad5edaf7ed5fea2f15a9fc37a54ab97fccf1c0addc0723fcd611f9701 not found: ID does not exist" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.856113 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf" (OuterVolumeSpecName: "kube-api-access-8ljjf") pod "e0134143-cc77-4e5e-8ae8-1e431f6e32bc" (UID: "e0134143-cc77-4e5e-8ae8-1e431f6e32bc"). InnerVolumeSpecName "kube-api-access-8ljjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.857155 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "e0134143-cc77-4e5e-8ae8-1e431f6e32bc" (UID: "e0134143-cc77-4e5e-8ae8-1e431f6e32bc"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.861428 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e0134143-cc77-4e5e-8ae8-1e431f6e32bc" (UID: "e0134143-cc77-4e5e-8ae8-1e431f6e32bc"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.861506 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.867697 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-4tsbp"] Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.935767 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ljjf\" (UniqueName: \"kubernetes.io/projected/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-kube-api-access-8ljjf\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.935826 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.935837 4899 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0134143-cc77-4e5e-8ae8-1e431f6e32bc-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.938383 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b537c2b0-ed88-404b-89ab-3259ac07f08e" path="/var/lib/kubelet/pods/b537c2b0-ed88-404b-89ab-3259ac07f08e/volumes" Jan 26 21:18:56 crc kubenswrapper[4899]: I0126 21:18:56.939044 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba72f737-1c99-4652-b573-d3a6b5c5a191" path="/var/lib/kubelet/pods/ba72f737-1c99-4652-b573-d3a6b5c5a191/volumes" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.330684 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.331063 4899 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" podUID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" containerName="operator" containerID="cri-o://7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486" gracePeriod=10 Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.567473 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.567765 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" podUID="18a84050-0343-41d2-ab82-1831b3e653d9" containerName="registry-server" containerID="cri-o://2bb6b52109c55b4be7d9b86200e4e5a27888577a8fac982c6e321db06d46cc87" gracePeriod=30 Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.608258 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.616100 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590r92xp"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.711280 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.747802 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tbcz\" (UniqueName: \"kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz\") pod \"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0\" (UID: \"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0\") " Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.754226 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz" (OuterVolumeSpecName: "kube-api-access-7tbcz") pod "ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" (UID: "ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0"). InnerVolumeSpecName "kube-api-access-7tbcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.826730 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" event={"ID":"e0134143-cc77-4e5e-8ae8-1e431f6e32bc","Type":"ContainerDied","Data":"88cf8ac7d4ea160dd883ecaf376bb7ca93df513bc769f2353901d37d1f561591"} Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.827032 4899 scope.go:117] "RemoveContainer" containerID="2824018950b84b0563c475c6bb42a452da4b695e93fa0b3167aeef1a27c8b630" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.827127 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.831067 4899 generic.go:334] "Generic (PLEG): container finished" podID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" containerID="7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486" exitCode=0 Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.831143 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" event={"ID":"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0","Type":"ContainerDied","Data":"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486"} Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.831180 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" event={"ID":"ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0","Type":"ContainerDied","Data":"379339beac979adcfc93551657239aaaee4a20e5aae06c3248d23fcf819a4df7"} Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.831228 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.834468 4899 generic.go:334] "Generic (PLEG): container finished" podID="18a84050-0343-41d2-ab82-1831b3e653d9" containerID="2bb6b52109c55b4be7d9b86200e4e5a27888577a8fac982c6e321db06d46cc87" exitCode=0 Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.834509 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" event={"ID":"18a84050-0343-41d2-ab82-1831b3e653d9","Type":"ContainerDied","Data":"2bb6b52109c55b4be7d9b86200e4e5a27888577a8fac982c6e321db06d46cc87"} Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.848885 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tbcz\" (UniqueName: \"kubernetes.io/projected/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0-kube-api-access-7tbcz\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.856154 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.863683 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c4c5f769-kdd52"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.866562 4899 scope.go:117] "RemoveContainer" containerID="7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.868478 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.872848 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-j6sdx"] Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.885244 4899 scope.go:117] 
"RemoveContainer" containerID="7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486" Jan 26 21:18:57 crc kubenswrapper[4899]: E0126 21:18:57.893382 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486\": container with ID starting with 7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486 not found: ID does not exist" containerID="7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.893420 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486"} err="failed to get container status \"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486\": rpc error: code = NotFound desc = could not find container \"7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486\": container with ID starting with 7bfb40871c77b67283f7d1cd0c5ee1be8ac01a06600c4f448a23b9ca43f2b486 not found: ID does not exist" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.914384 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.956337 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncr6d\" (UniqueName: \"kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d\") pod \"18a84050-0343-41d2-ab82-1831b3e653d9\" (UID: \"18a84050-0343-41d2-ab82-1831b3e653d9\") " Jan 26 21:18:57 crc kubenswrapper[4899]: I0126 21:18:57.960360 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d" (OuterVolumeSpecName: "kube-api-access-ncr6d") pod "18a84050-0343-41d2-ab82-1831b3e653d9" (UID: "18a84050-0343-41d2-ab82-1831b3e653d9"). InnerVolumeSpecName "kube-api-access-ncr6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.058255 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncr6d\" (UniqueName: \"kubernetes.io/projected/18a84050-0343-41d2-ab82-1831b3e653d9-kube-api-access-ncr6d\") on node \"crc\" DevicePath \"\"" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.846751 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.846739 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-d2bdt" event={"ID":"18a84050-0343-41d2-ab82-1831b3e653d9","Type":"ContainerDied","Data":"430552626b8a5f0e429cdee781db05651be75a5668891c5b006ccd9c9976b52b"} Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.846988 4899 scope.go:117] "RemoveContainer" containerID="2bb6b52109c55b4be7d9b86200e4e5a27888577a8fac982c6e321db06d46cc87" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.890659 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.896557 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-d2bdt"] Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.937951 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a84050-0343-41d2-ab82-1831b3e653d9" path="/var/lib/kubelet/pods/18a84050-0343-41d2-ab82-1831b3e653d9/volumes" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.938589 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b167cf4e-88b9-485d-a032-5767edc49205" path="/var/lib/kubelet/pods/b167cf4e-88b9-485d-a032-5767edc49205/volumes" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.939196 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" path="/var/lib/kubelet/pods/e0134143-cc77-4e5e-8ae8-1e431f6e32bc/volumes" Jan 26 21:18:58 crc kubenswrapper[4899]: I0126 21:18:58.940121 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" path="/var/lib/kubelet/pods/ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0/volumes" Jan 26 21:19:03 crc 
kubenswrapper[4899]: I0126 21:19:03.511883 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.512599 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" podUID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" containerName="manager" containerID="cri-o://3a90cbc7dd776d4c119df39cfbf42140429ddff24b5c1eace176a432e1975f12" gracePeriod=10 Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.865354 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.865610 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-whj5n" podUID="8b6455e9-9d16-4177-a060-0f72c68f12e2" containerName="registry-server" containerID="cri-o://b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7" gracePeriod=30 Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.902167 4899 generic.go:334] "Generic (PLEG): container finished" podID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" containerID="3a90cbc7dd776d4c119df39cfbf42140429ddff24b5c1eace176a432e1975f12" exitCode=0 Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.902222 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" event={"ID":"f89e50e9-8464-4607-ba9f-97e83b9f09ae","Type":"ContainerDied","Data":"3a90cbc7dd776d4c119df39cfbf42140429ddff24b5c1eace176a432e1975f12"} Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.911792 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw"] Jan 26 21:19:03 crc kubenswrapper[4899]: I0126 21:19:03.915646 4899 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack-operators/ab4cebf9c8e9911cdf6a66ff2b7d90dca88985d852ea4187b325e8f1625dwlw"] Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.242338 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.345401 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c66gj\" (UniqueName: \"kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj\") pod \"8b6455e9-9d16-4177-a060-0f72c68f12e2\" (UID: \"8b6455e9-9d16-4177-a060-0f72c68f12e2\") " Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.358160 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj" (OuterVolumeSpecName: "kube-api-access-c66gj") pod "8b6455e9-9d16-4177-a060-0f72c68f12e2" (UID: "8b6455e9-9d16-4177-a060-0f72c68f12e2"). InnerVolumeSpecName "kube-api-access-c66gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.447109 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c66gj\" (UniqueName: \"kubernetes.io/projected/8b6455e9-9d16-4177-a060-0f72c68f12e2-kube-api-access-c66gj\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.461874 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.548069 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfvbr\" (UniqueName: \"kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr\") pod \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.548130 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert\") pod \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.548208 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert\") pod \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\" (UID: \"f89e50e9-8464-4607-ba9f-97e83b9f09ae\") " Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.550840 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr" (OuterVolumeSpecName: "kube-api-access-cfvbr") pod "f89e50e9-8464-4607-ba9f-97e83b9f09ae" (UID: "f89e50e9-8464-4607-ba9f-97e83b9f09ae"). InnerVolumeSpecName "kube-api-access-cfvbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.550857 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f89e50e9-8464-4607-ba9f-97e83b9f09ae" (UID: "f89e50e9-8464-4607-ba9f-97e83b9f09ae"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.551051 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "f89e50e9-8464-4607-ba9f-97e83b9f09ae" (UID: "f89e50e9-8464-4607-ba9f-97e83b9f09ae"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.651678 4899 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.651718 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfvbr\" (UniqueName: \"kubernetes.io/projected/f89e50e9-8464-4607-ba9f-97e83b9f09ae-kube-api-access-cfvbr\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.651731 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89e50e9-8464-4607-ba9f-97e83b9f09ae-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.851075 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.851348 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" podUID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" containerName="manager" containerID="cri-o://c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375" gracePeriod=10 Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.918845 4899 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.918861 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk" event={"ID":"f89e50e9-8464-4607-ba9f-97e83b9f09ae","Type":"ContainerDied","Data":"129aa01a298f5c14a5a6bdd4cf8edf6e3c0e66bcfb5308b723e12aa3f493287b"} Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.918919 4899 scope.go:117] "RemoveContainer" containerID="3a90cbc7dd776d4c119df39cfbf42140429ddff24b5c1eace176a432e1975f12" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.920486 4899 generic.go:334] "Generic (PLEG): container finished" podID="8b6455e9-9d16-4177-a060-0f72c68f12e2" containerID="b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7" exitCode=0 Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.920521 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-whj5n" event={"ID":"8b6455e9-9d16-4177-a060-0f72c68f12e2","Type":"ContainerDied","Data":"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7"} Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.920581 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-whj5n" event={"ID":"8b6455e9-9d16-4177-a060-0f72c68f12e2","Type":"ContainerDied","Data":"24506e9ab5da1d16a0ee95595471ebd2f4b8a53b9ee39cab6ee360f2dfdde282"} Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.920540 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-whj5n" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.937613 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77881c29-649c-4e59-8c20-8d468f552536" path="/var/lib/kubelet/pods/77881c29-649c-4e59-8c20-8d468f552536/volumes" Jan 26 21:19:04 crc kubenswrapper[4899]: I0126 21:19:04.992296 4899 scope.go:117] "RemoveContainer" containerID="b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.008348 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.011437 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-whj5n"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.014113 4899 scope.go:117] "RemoveContainer" containerID="b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7" Jan 26 21:19:05 crc kubenswrapper[4899]: E0126 21:19:05.015365 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7\": container with ID starting with b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7 not found: ID does not exist" containerID="b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.015403 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7"} err="failed to get container status \"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7\": rpc error: code = NotFound desc = could not find container \"b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7\": container with ID starting with 
b40b74fc57e5d3eae02f2615d03f5d479e5b7cc86e1a2f01b73b8f79344b27c7 not found: ID does not exist" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.021971 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.024743 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5789d54c4b-2jdpk"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.224243 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.224453 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-cn6jg" podUID="e843f00e-9baa-4509-8226-a90bae3a2451" containerName="registry-server" containerID="cri-o://2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302" gracePeriod=30 Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.261536 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.267754 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/c6fda70cdd0b39e1f547ec11eeb9ee3e5adfccb1f8dc8681f58c1e1ad9j9zr9"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.374788 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.462561 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert\") pod \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.462612 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8rt4\" (UniqueName: \"kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4\") pod \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.462632 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert\") pod \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\" (UID: \"06708d72-0e7f-4c79-b25e-09103c6e3fc4\") " Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.467103 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "06708d72-0e7f-4c79-b25e-09103c6e3fc4" (UID: "06708d72-0e7f-4c79-b25e-09103c6e3fc4"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.467143 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "06708d72-0e7f-4c79-b25e-09103c6e3fc4" (UID: "06708d72-0e7f-4c79-b25e-09103c6e3fc4"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.467147 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4" (OuterVolumeSpecName: "kube-api-access-t8rt4") pod "06708d72-0e7f-4c79-b25e-09103c6e3fc4" (UID: "06708d72-0e7f-4c79-b25e-09103c6e3fc4"). InnerVolumeSpecName "kube-api-access-t8rt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.566605 4899 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.566650 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8rt4\" (UniqueName: \"kubernetes.io/projected/06708d72-0e7f-4c79-b25e-09103c6e3fc4-kube-api-access-t8rt4\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.566662 4899 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06708d72-0e7f-4c79-b25e-09103c6e3fc4-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.701896 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.769217 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twhhq\" (UniqueName: \"kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq\") pod \"e843f00e-9baa-4509-8226-a90bae3a2451\" (UID: \"e843f00e-9baa-4509-8226-a90bae3a2451\") " Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.778510 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq" (OuterVolumeSpecName: "kube-api-access-twhhq") pod "e843f00e-9baa-4509-8226-a90bae3a2451" (UID: "e843f00e-9baa-4509-8226-a90bae3a2451"). InnerVolumeSpecName "kube-api-access-twhhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.871060 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twhhq\" (UniqueName: \"kubernetes.io/projected/e843f00e-9baa-4509-8226-a90bae3a2451-kube-api-access-twhhq\") on node \"crc\" DevicePath \"\"" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.927874 4899 generic.go:334] "Generic (PLEG): container finished" podID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" containerID="c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375" exitCode=0 Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.927976 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.928131 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" event={"ID":"06708d72-0e7f-4c79-b25e-09103c6e3fc4","Type":"ContainerDied","Data":"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375"} Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.928179 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j" event={"ID":"06708d72-0e7f-4c79-b25e-09103c6e3fc4","Type":"ContainerDied","Data":"8e53c23862858b34c191f5725afa2f4f9c62be7df22f6ecb8bec581d5335b26f"} Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.928197 4899 scope.go:117] "RemoveContainer" containerID="c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.932580 4899 generic.go:334] "Generic (PLEG): container finished" podID="e843f00e-9baa-4509-8226-a90bae3a2451" containerID="2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302" exitCode=0 Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.932624 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-cn6jg" event={"ID":"e843f00e-9baa-4509-8226-a90bae3a2451","Type":"ContainerDied","Data":"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302"} Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.932655 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-cn6jg" event={"ID":"e843f00e-9baa-4509-8226-a90bae3a2451","Type":"ContainerDied","Data":"7ef4295308aaf9168d6b9697dd674b43064e81ddddd36e787483b6e5480640fc"} Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.932630 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-cn6jg" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.947664 4899 scope.go:117] "RemoveContainer" containerID="c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375" Jan 26 21:19:05 crc kubenswrapper[4899]: E0126 21:19:05.948216 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375\": container with ID starting with c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375 not found: ID does not exist" containerID="c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.948259 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375"} err="failed to get container status \"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375\": rpc error: code = NotFound desc = could not find container \"c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375\": container with ID starting with c47c9288ec9afbe71c7f7a4267722948f45ffe33cd7a2b0e9526ae3ec6315375 not found: ID does not exist" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.948289 4899 scope.go:117] "RemoveContainer" containerID="2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.962855 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.965592 4899 scope.go:117] "RemoveContainer" containerID="2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302" Jan 26 21:19:05 crc kubenswrapper[4899]: E0126 21:19:05.966270 4899 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302\": container with ID starting with 2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302 not found: ID does not exist" containerID="2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.966304 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302"} err="failed to get container status \"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302\": rpc error: code = NotFound desc = could not find container \"2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302\": container with ID starting with 2e112c429587ea9557c3e727de245fadb5485e691ea1d74f51f56b99129dd302 not found: ID does not exist" Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.970204 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7d8d94bbd6-zn79j"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.974052 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:19:05 crc kubenswrapper[4899]: I0126 21:19:05.977553 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-cn6jg"] Jan 26 21:19:06 crc kubenswrapper[4899]: I0126 21:19:06.940755 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" path="/var/lib/kubelet/pods/06708d72-0e7f-4c79-b25e-09103c6e3fc4/volumes" Jan 26 21:19:06 crc kubenswrapper[4899]: I0126 21:19:06.942316 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6455e9-9d16-4177-a060-0f72c68f12e2" path="/var/lib/kubelet/pods/8b6455e9-9d16-4177-a060-0f72c68f12e2/volumes" Jan 26 21:19:06 crc 
kubenswrapper[4899]: I0126 21:19:06.943199 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f152a72-a91c-420a-a87e-a3a5b07bfe7b" path="/var/lib/kubelet/pods/8f152a72-a91c-420a-a87e-a3a5b07bfe7b/volumes" Jan 26 21:19:06 crc kubenswrapper[4899]: I0126 21:19:06.944507 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e843f00e-9baa-4509-8226-a90bae3a2451" path="/var/lib/kubelet/pods/e843f00e-9baa-4509-8226-a90bae3a2451/volumes" Jan 26 21:19:06 crc kubenswrapper[4899]: I0126 21:19:06.945070 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" path="/var/lib/kubelet/pods/f89e50e9-8464-4607-ba9f-97e83b9f09ae/volumes" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.404509 4899 scope.go:117] "RemoveContainer" containerID="a1f6d8b9cd8e4346edb9826d736ffd19197b9c4573847353c5e8ed20e06d6443" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.443135 4899 scope.go:117] "RemoveContainer" containerID="0949c2a521d3a4b80a574ccf22e3111d270f6a139c1b0ec5e6a568969dd7cfa8" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.461747 4899 scope.go:117] "RemoveContainer" containerID="5a4b7b12abaf313fbd20db307215ff14307bcbe06e080a43181b829ef0feb5e7" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.485077 4899 scope.go:117] "RemoveContainer" containerID="f2260c1878f0f80c6406c66bf8626f4036e6bab59943aaea1d5243720753b490" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.517603 4899 scope.go:117] "RemoveContainer" containerID="29e24ef8ab52a7de4c3756110d4fe7fba6266bac920ee6b372cc6279374069e0" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.533397 4899 scope.go:117] "RemoveContainer" containerID="37c703bff6a0059d609e6116d28217fa8b8b28b3a53ad45bb7f1275b2bd1446d" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.553327 4899 scope.go:117] "RemoveContainer" containerID="cd9a23f4ec7372dbf26294faa8f4a368dd88a125c276a70a2b41495672c78589" Jan 26 21:19:16 
crc kubenswrapper[4899]: I0126 21:19:16.582193 4899 scope.go:117] "RemoveContainer" containerID="5aea9d5a9207310bc145db795d8311f8723356334feaea7be3c965be90c14888" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.599082 4899 scope.go:117] "RemoveContainer" containerID="4254f5894fc9842672915c313d09f71455e399b0a67f6b8ea50bcd9dc9a61926" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.616683 4899 scope.go:117] "RemoveContainer" containerID="2528605dc2ee759014314217bc33a4b9311bfb4874ac4288f66b4c65a6e048ba" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.649773 4899 scope.go:117] "RemoveContainer" containerID="a76d55c10e4b48d800c14ecf7c884466851e49b1fed31835457352172400a960" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.665273 4899 scope.go:117] "RemoveContainer" containerID="b163ea38606c5443d2fbae9278cfe6c8b71e2ad920fc9b653db3620cb4031072" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.680071 4899 scope.go:117] "RemoveContainer" containerID="30f0120d85ad97519e7148002e75cfef37d2cb51a195564b21970b667b2df9f0" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.695441 4899 scope.go:117] "RemoveContainer" containerID="a23c18f9f54b53c233d0fb7b0cc84351b4afa0e96471c277b8e2870d151fafb3" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.714604 4899 scope.go:117] "RemoveContainer" containerID="a8a76e249db1ad6589f4bae5c7f2b45259c587258f709853efa84ca116055c61" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.729001 4899 scope.go:117] "RemoveContainer" containerID="3a0e9779a42c2b693f9013ebcb4445da85ff2ddd0bfbe08fbf47f3bb7cfa969a" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.740951 4899 scope.go:117] "RemoveContainer" containerID="0e78b16a213017dbe04ebf891ddcbcf672337af40f3e3e0e5b75c31e2719551f" Jan 26 21:19:16 crc kubenswrapper[4899]: I0126 21:19:16.755810 4899 scope.go:117] "RemoveContainer" containerID="14875699b6f89f706d1e3913351c8304135ee0f875f0db77b66f6555212a776c" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 
21:19:29.268350 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rzmz2/must-gather-hdb56"] Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269217 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e843f00e-9baa-4509-8226-a90bae3a2451" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269234 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e843f00e-9baa-4509-8226-a90bae3a2451" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269243 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" containerName="ceph" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269251 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" containerName="ceph" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269265 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfa7325-0ae2-44cb-9523-21010e9af015" containerName="manila-service-cleanup-n5b5h655" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269273 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfa7325-0ae2-44cb-9523-21010e9af015" containerName="manila-service-cleanup-n5b5h655" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269287 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269297 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269309 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269316 4899 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269327 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a84050-0343-41d2-ab82-1831b3e653d9" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269334 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a84050-0343-41d2-ab82-1831b3e653d9" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269346 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6455e9-9d16-4177-a060-0f72c68f12e2" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269354 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6455e9-9d16-4177-a060-0f72c68f12e2" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269362 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269369 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269380 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc45ce4-b190-43f2-ad5a-d738caf6f033" containerName="mariadb-account-delete" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269388 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc45ce4-b190-43f2-ad5a-d738caf6f033" containerName="mariadb-account-delete" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269398 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea09b8ff-8868-45dc-92e5-bdee96d13107" containerName="memcached" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269406 4899 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea09b8ff-8868-45dc-92e5-bdee96d13107" containerName="memcached" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269417 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba72f737-1c99-4652-b573-d3a6b5c5a191" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269424 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba72f737-1c99-4652-b573-d3a6b5c5a191" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269434 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269441 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269447 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" containerName="keystone-api" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269454 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" containerName="keystone-api" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269467 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269474 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269484 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="setup-container" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269490 4899 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="setup-container" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269500 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269507 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269517 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="rabbitmq" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269524 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="rabbitmq" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269539 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269546 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269558 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93293cee-6c86-4865-8a19-b43659a851f3" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269565 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="93293cee-6c86-4865-8a19-b43659a851f3" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269574 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93293cee-6c86-4865-8a19-b43659a851f3" containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269581 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="93293cee-6c86-4865-8a19-b43659a851f3" 
containerName="mysql-bootstrap" Jan 26 21:19:29 crc kubenswrapper[4899]: E0126 21:19:29.269589 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" containerName="operator" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269597 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" containerName="operator" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269713 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e843f00e-9baa-4509-8226-a90bae3a2451" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269727 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfa7325-0ae2-44cb-9523-21010e9af015" containerName="manila-service-cleanup-n5b5h655" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269737 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="06708d72-0e7f-4c79-b25e-09103c6e3fc4" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269745 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba72f737-1c99-4652-b573-d3a6b5c5a191" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269757 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c1bff1c-ce1a-4adf-96f2-b0ac5b108ce0" containerName="keystone-api" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269768 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="93293cee-6c86-4865-8a19-b43659a851f3" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269778 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea09b8ff-8868-45dc-92e5-bdee96d13107" containerName="memcached" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269785 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3dc3a1-675d-4a01-9e79-2b243ce6cdb0" 
containerName="operator" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269796 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d25306a-7534-45dc-a752-efdb1bb3c2f8" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269806 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="951664be-c618-4a13-8265-32cf5a4d7cf1" containerName="ceph" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269816 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1149d0e-e93d-496a-9022-51fa77168394" containerName="galera" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269824 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc45ce4-b190-43f2-ad5a-d738caf6f033" containerName="mariadb-account-delete" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269835 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89e50e9-8464-4607-ba9f-97e83b9f09ae" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269843 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b49daa9-f343-4c81-88d5-ded2e08582aa" containerName="rabbitmq" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269852 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6455e9-9d16-4177-a060-0f72c68f12e2" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269861 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0134143-cc77-4e5e-8ae8-1e431f6e32bc" containerName="manager" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.269870 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a84050-0343-41d2-ab82-1831b3e653d9" containerName="registry-server" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.270682 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.272946 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rzmz2"/"openshift-service-ca.crt" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.273136 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rzmz2"/"kube-root-ca.crt" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.279357 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rzmz2/must-gather-hdb56"] Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.403474 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjpp\" (UniqueName: \"kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.403598 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.504377 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.504464 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4mjpp\" (UniqueName: \"kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.505293 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.522772 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjpp\" (UniqueName: \"kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp\") pod \"must-gather-hdb56\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") " pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.590531 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzmz2/must-gather-hdb56" Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.797323 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rzmz2/must-gather-hdb56"] Jan 26 21:19:29 crc kubenswrapper[4899]: I0126 21:19:29.808034 4899 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 21:19:30 crc kubenswrapper[4899]: I0126 21:19:30.097040 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzmz2/must-gather-hdb56" event={"ID":"c052e247-0e73-40f2-a41c-96e408983b75","Type":"ContainerStarted","Data":"3d8fe5117b2d1492aac19c13c58871d35bbef3121a09c8dde60f2dec43e723fd"} Jan 26 21:19:38 crc kubenswrapper[4899]: I0126 21:19:38.147058 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzmz2/must-gather-hdb56" event={"ID":"c052e247-0e73-40f2-a41c-96e408983b75","Type":"ContainerStarted","Data":"55adbc267ebd6e69b2e141a89aa38155529aaa677633ad0f73020490762d99cd"} Jan 26 21:19:38 crc kubenswrapper[4899]: I0126 21:19:38.147636 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzmz2/must-gather-hdb56" event={"ID":"c052e247-0e73-40f2-a41c-96e408983b75","Type":"ContainerStarted","Data":"e729f17e8ce1bde998d6dc9f582f88ef368b3c612b5920f007a6ee735cd3465c"} Jan 26 21:19:38 crc kubenswrapper[4899]: I0126 21:19:38.163674 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rzmz2/must-gather-hdb56" podStartSLOduration=1.860605697 podStartE2EDuration="9.163655779s" podCreationTimestamp="2026-01-26 21:19:29 +0000 UTC" firstStartedPulling="2026-01-26 21:19:29.807921438 +0000 UTC m=+1459.189509465" lastFinishedPulling="2026-01-26 21:19:37.11097151 +0000 UTC m=+1466.492559547" observedRunningTime="2026-01-26 21:19:38.162367082 +0000 UTC m=+1467.543955149" watchObservedRunningTime="2026-01-26 21:19:38.163655779 +0000 UTC 
m=+1467.545243836" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.003465 4899 scope.go:117] "RemoveContainer" containerID="3f6a1501309f4e03724c58de6aec82442a41524ccf2beb9610e16ef341bb4858" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.033565 4899 scope.go:117] "RemoveContainer" containerID="a32cc094b587ebae9b7509d546f95399c82ee7ad73fdc5c08f331277ada92de0" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.061478 4899 scope.go:117] "RemoveContainer" containerID="88eac7395b04ea1aa8b113b3fe8dfa17b3b137a066beb16947291821671750db" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.086651 4899 scope.go:117] "RemoveContainer" containerID="4a0238be6bd14d1b7d37e317ac6550ae222d03d3441ce68f6a5ac116ea49192f" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.123856 4899 scope.go:117] "RemoveContainer" containerID="8ecf27c40c20b8ae4e41e82b29bc3326f03e7eabdc3a698331d8e84bcb44660a" Jan 26 21:20:17 crc kubenswrapper[4899]: I0126 21:20:17.137267 4899 scope.go:117] "RemoveContainer" containerID="8aea9cd4643d28009c3bb744ec5f32fb433ed098cd10d859e4e3026e71e96ac9" Jan 26 21:20:22 crc kubenswrapper[4899]: I0126 21:20:22.712756 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5kjdm_7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0/control-plane-machine-set-operator/0.log" Jan 26 21:20:22 crc kubenswrapper[4899]: I0126 21:20:22.888642 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lxbfv_53f1cb30-6429-4ebc-8301-5f1de3e70611/kube-rbac-proxy/0.log" Jan 26 21:20:22 crc kubenswrapper[4899]: I0126 21:20:22.923738 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lxbfv_53f1cb30-6429-4ebc-8301-5f1de3e70611/machine-api-operator/0.log" Jan 26 21:20:30 crc kubenswrapper[4899]: I0126 21:20:30.109973 4899 patch_prober.go:28] interesting 
pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:20:30 crc kubenswrapper[4899]: I0126 21:20:30.110313 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.233598 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k5x85_887bd990-cb6d-4f69-bcf2-cf642b2c165b/kube-rbac-proxy/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.291496 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k5x85_887bd990-cb6d-4f69-bcf2-cf642b2c165b/controller/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.433019 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.577887 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.592836 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.624344 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 
21:20:50.633880 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.807566 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.824165 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.828233 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.833543 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.982055 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.988584 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:20:50 crc kubenswrapper[4899]: I0126 21:20:50.988780 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.028255 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/controller/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.167288 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/frr-metrics/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.205592 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/kube-rbac-proxy/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.230879 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/kube-rbac-proxy-frr/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.419178 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/reloader/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.426686 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5kknz_2c74cccf-4954-447b-90d6-438a41878caa/frr-k8s-webhook-server/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.617384 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78b88669b5-qgw6p_65a48fb2-a892-4d8e-96ba-7fee5747d2f3/manager/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.627717 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/frr/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.799005 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d9559955b-jj9n5_7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec/webhook-server/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.839247 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ql4jc_5aede76a-7f3b-4b2d-827f-5aae59a3a65f/kube-rbac-proxy/0.log" Jan 26 21:20:51 crc kubenswrapper[4899]: I0126 21:20:51.957097 4899 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ql4jc_5aede76a-7f3b-4b2d-827f-5aae59a3a65f/speaker/0.log" Jan 26 21:21:00 crc kubenswrapper[4899]: I0126 21:21:00.109973 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:21:00 crc kubenswrapper[4899]: I0126 21:21:00.110616 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:21:05 crc kubenswrapper[4899]: I0126 21:21:05.887363 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 21:21:05 crc kubenswrapper[4899]: I0126 21:21:05.889109 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:05 crc kubenswrapper[4899]: I0126 21:21:05.900860 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.009390 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6j2\" (UniqueName: \"kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.009548 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.009572 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.110851 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.110909 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.110951 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6j2\" (UniqueName: \"kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.111509 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.111586 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.132512 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6j2\" (UniqueName: \"kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2\") pod \"redhat-marketplace-s54t4\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.206614 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.418669 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 21:21:06 crc kubenswrapper[4899]: I0126 21:21:06.718132 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerStarted","Data":"508f9802322e01c8c29b5b68f439516c09a0df49e0391cb49f0f6064f6e9bdc3"} Jan 26 21:21:07 crc kubenswrapper[4899]: I0126 21:21:07.725301 4899 generic.go:334] "Generic (PLEG): container finished" podID="335d2162-69b9-448c-af07-d0df93cdf597" containerID="bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a" exitCode=0 Jan 26 21:21:07 crc kubenswrapper[4899]: I0126 21:21:07.725397 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerDied","Data":"bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a"} Jan 26 21:21:08 crc kubenswrapper[4899]: I0126 21:21:08.731485 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerStarted","Data":"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171"} Jan 26 21:21:09 crc kubenswrapper[4899]: I0126 21:21:09.738826 4899 generic.go:334] "Generic (PLEG): container finished" podID="335d2162-69b9-448c-af07-d0df93cdf597" containerID="06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171" exitCode=0 Jan 26 21:21:09 crc kubenswrapper[4899]: I0126 21:21:09.738886 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" 
event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerDied","Data":"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171"} Jan 26 21:21:10 crc kubenswrapper[4899]: I0126 21:21:10.747162 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerStarted","Data":"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7"} Jan 26 21:21:10 crc kubenswrapper[4899]: I0126 21:21:10.766236 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s54t4" podStartSLOduration=3.335481528 podStartE2EDuration="5.766212749s" podCreationTimestamp="2026-01-26 21:21:05 +0000 UTC" firstStartedPulling="2026-01-26 21:21:07.726761911 +0000 UTC m=+1557.108349948" lastFinishedPulling="2026-01-26 21:21:10.157493132 +0000 UTC m=+1559.539081169" observedRunningTime="2026-01-26 21:21:10.761098025 +0000 UTC m=+1560.142686062" watchObservedRunningTime="2026-01-26 21:21:10.766212749 +0000 UTC m=+1560.147800786" Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.820843 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.822519 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.836512 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.924171 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.924541 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqlw\" (UniqueName: \"kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:14 crc kubenswrapper[4899]: I0126 21:21:14.924580 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.025851 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.025895 4899 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7qqlw\" (UniqueName: \"kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.025959 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.026578 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.026821 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.045569 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qqlw\" (UniqueName: \"kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw\") pod \"community-operators-lkld6\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.155834 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.471533 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.775663 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerStarted","Data":"7dfb8272ceff9b6231476ded461fc080220935fa8aedf847d691f967995370a1"} Jan 26 21:21:15 crc kubenswrapper[4899]: I0126 21:21:15.906949 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.105870 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.110281 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.206735 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.206777 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.207168 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.251505 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.374997 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.400481 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.405109 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/extract/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.545530 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.740749 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.770733 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.771275 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.783549 4899 generic.go:334] "Generic (PLEG): container finished" podID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerID="29b8eee60f829c75cdaf751f3f62e4dfa55d84379b843a916d4f4d26d59ada46" exitCode=0 Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.783594 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerDied","Data":"29b8eee60f829c75cdaf751f3f62e4dfa55d84379b843a916d4f4d26d59ada46"} Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.825222 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.955510 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:21:16 crc kubenswrapper[4899]: I0126 21:21:16.976675 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.212671 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.270550 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/registry-server/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.325606 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.377622 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.432912 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.641819 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.681112 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.795112 4899 generic.go:334] "Generic (PLEG): container finished" podID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerID="4bab0f6e516a396e76ccb1f6068ebf53e784d8f76cf0ae7a9a7ea1cfe8322410" exitCode=0 Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.795273 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerDied","Data":"4bab0f6e516a396e76ccb1f6068ebf53e784d8f76cf0ae7a9a7ea1cfe8322410"} Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.844299 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-utilities/0.log" Jan 26 21:21:17 crc kubenswrapper[4899]: I0126 21:21:17.971257 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/registry-server/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.075710 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-content/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.091292 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-utilities/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.110408 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-content/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.305600 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-content/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.324047 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lkld6_0d6a4fd4-0f21-43b7-a94a-13b506122741/extract-utilities/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.500674 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fqdv9_6c65153e-2169-4842-9a1c-60b0e20f4255/marketplace-operator/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.504939 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.596138 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 21:21:18 crc 
kubenswrapper[4899]: I0126 21:21:18.716059 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.767215 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.792759 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.804047 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerStarted","Data":"84628b1abc611e2a47c228a32f333d83f43886e4e042e0479f006191a320ad6b"} Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.804172 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s54t4" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="registry-server" containerID="cri-o://14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7" gracePeriod=2 Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.949308 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:21:18 crc kubenswrapper[4899]: I0126 21:21:18.953843 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.101593 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/registry-server/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.212599 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-utilities/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.241120 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.262382 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lkld6" podStartSLOduration=3.79838341 podStartE2EDuration="5.262365377s" podCreationTimestamp="2026-01-26 21:21:14 +0000 UTC" firstStartedPulling="2026-01-26 21:21:16.785325019 +0000 UTC m=+1566.166913056" lastFinishedPulling="2026-01-26 21:21:18.249306996 +0000 UTC m=+1567.630895023" observedRunningTime="2026-01-26 21:21:18.830385254 +0000 UTC m=+1568.211973301" watchObservedRunningTime="2026-01-26 21:21:19.262365377 +0000 UTC m=+1568.643953414" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.282623 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities\") pod \"335d2162-69b9-448c-af07-d0df93cdf597\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.282687 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m6j2\" (UniqueName: \"kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2\") pod \"335d2162-69b9-448c-af07-d0df93cdf597\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.282788 4899 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content\") pod \"335d2162-69b9-448c-af07-d0df93cdf597\" (UID: \"335d2162-69b9-448c-af07-d0df93cdf597\") " Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.305100 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities" (OuterVolumeSpecName: "utilities") pod "335d2162-69b9-448c-af07-d0df93cdf597" (UID: "335d2162-69b9-448c-af07-d0df93cdf597"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.305867 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2" (OuterVolumeSpecName: "kube-api-access-8m6j2") pod "335d2162-69b9-448c-af07-d0df93cdf597" (UID: "335d2162-69b9-448c-af07-d0df93cdf597"). InnerVolumeSpecName "kube-api-access-8m6j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.340438 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "335d2162-69b9-448c-af07-d0df93cdf597" (UID: "335d2162-69b9-448c-af07-d0df93cdf597"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.384797 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.384844 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335d2162-69b9-448c-af07-d0df93cdf597-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.384858 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m6j2\" (UniqueName: \"kubernetes.io/projected/335d2162-69b9-448c-af07-d0df93cdf597-kube-api-access-8m6j2\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.443696 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-content/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.447568 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-utilities/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.457031 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-content/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.648816 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-utilities/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.663756 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/extract-content/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.714671 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s54t4_335d2162-69b9-448c-af07-d0df93cdf597/registry-server/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.812482 4899 generic.go:334] "Generic (PLEG): container finished" podID="335d2162-69b9-448c-af07-d0df93cdf597" containerID="14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7" exitCode=0 Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.812561 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s54t4" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.812591 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerDied","Data":"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7"} Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.812645 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s54t4" event={"ID":"335d2162-69b9-448c-af07-d0df93cdf597","Type":"ContainerDied","Data":"508f9802322e01c8c29b5b68f439516c09a0df49e0391cb49f0f6064f6e9bdc3"} Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.812670 4899 scope.go:117] "RemoveContainer" containerID="14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.830078 4899 scope.go:117] "RemoveContainer" containerID="06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.844051 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 
21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.854426 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s54t4"] Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.864388 4899 scope.go:117] "RemoveContainer" containerID="bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.864467 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.876841 4899 scope.go:117] "RemoveContainer" containerID="14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7" Jan 26 21:21:19 crc kubenswrapper[4899]: E0126 21:21:19.877256 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7\": container with ID starting with 14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7 not found: ID does not exist" containerID="14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.877285 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7"} err="failed to get container status \"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7\": rpc error: code = NotFound desc = could not find container \"14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7\": container with ID starting with 14b66226157b16a9057e74ed7bd9f7b9656fba233e24d9d82aa77a4893ed10b7 not found: ID does not exist" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.877304 4899 scope.go:117] "RemoveContainer" containerID="06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171" Jan 
26 21:21:19 crc kubenswrapper[4899]: E0126 21:21:19.877611 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171\": container with ID starting with 06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171 not found: ID does not exist" containerID="06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.877664 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171"} err="failed to get container status \"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171\": rpc error: code = NotFound desc = could not find container \"06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171\": container with ID starting with 06f23d1dec1923f403bf5c3849ee9ae10a5585824302203bc8f2d4e353b72171 not found: ID does not exist" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.877703 4899 scope.go:117] "RemoveContainer" containerID="bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a" Jan 26 21:21:19 crc kubenswrapper[4899]: E0126 21:21:19.878107 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a\": container with ID starting with bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a not found: ID does not exist" containerID="bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a" Jan 26 21:21:19 crc kubenswrapper[4899]: I0126 21:21:19.878130 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a"} err="failed to get container status 
\"bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a\": rpc error: code = NotFound desc = could not find container \"bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a\": container with ID starting with bfba837602e57a4cc1fb79a18065690b067a5775abb80ad289cc16bc9137b15a not found: ID does not exist" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.060209 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.067186 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.089475 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.252244 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.334649 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.548224 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/registry-server/0.log" Jan 26 21:21:20 crc kubenswrapper[4899]: I0126 21:21:20.937223 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335d2162-69b9-448c-af07-d0df93cdf597" path="/var/lib/kubelet/pods/335d2162-69b9-448c-af07-d0df93cdf597/volumes" Jan 26 21:21:25 crc 
kubenswrapper[4899]: I0126 21:21:25.156198 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:25 crc kubenswrapper[4899]: I0126 21:21:25.156516 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:25 crc kubenswrapper[4899]: I0126 21:21:25.204605 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:25 crc kubenswrapper[4899]: I0126 21:21:25.884130 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:25 crc kubenswrapper[4899]: I0126 21:21:25.929562 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:27 crc kubenswrapper[4899]: I0126 21:21:27.872738 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lkld6" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="registry-server" containerID="cri-o://84628b1abc611e2a47c228a32f333d83f43886e4e042e0479f006191a320ad6b" gracePeriod=2 Jan 26 21:21:28 crc kubenswrapper[4899]: I0126 21:21:28.880096 4899 generic.go:334] "Generic (PLEG): container finished" podID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerID="84628b1abc611e2a47c228a32f333d83f43886e4e042e0479f006191a320ad6b" exitCode=0 Jan 26 21:21:28 crc kubenswrapper[4899]: I0126 21:21:28.880142 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerDied","Data":"84628b1abc611e2a47c228a32f333d83f43886e4e042e0479f006191a320ad6b"} Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.315718 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.419683 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities\") pod \"0d6a4fd4-0f21-43b7-a94a-13b506122741\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.419745 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content\") pod \"0d6a4fd4-0f21-43b7-a94a-13b506122741\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.419841 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qqlw\" (UniqueName: \"kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw\") pod \"0d6a4fd4-0f21-43b7-a94a-13b506122741\" (UID: \"0d6a4fd4-0f21-43b7-a94a-13b506122741\") " Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.420875 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities" (OuterVolumeSpecName: "utilities") pod "0d6a4fd4-0f21-43b7-a94a-13b506122741" (UID: "0d6a4fd4-0f21-43b7-a94a-13b506122741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.425674 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw" (OuterVolumeSpecName: "kube-api-access-7qqlw") pod "0d6a4fd4-0f21-43b7-a94a-13b506122741" (UID: "0d6a4fd4-0f21-43b7-a94a-13b506122741"). InnerVolumeSpecName "kube-api-access-7qqlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.470021 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d6a4fd4-0f21-43b7-a94a-13b506122741" (UID: "0d6a4fd4-0f21-43b7-a94a-13b506122741"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.522090 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qqlw\" (UniqueName: \"kubernetes.io/projected/0d6a4fd4-0f21-43b7-a94a-13b506122741-kube-api-access-7qqlw\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.522119 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.522128 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6a4fd4-0f21-43b7-a94a-13b506122741-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.887756 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkld6" event={"ID":"0d6a4fd4-0f21-43b7-a94a-13b506122741","Type":"ContainerDied","Data":"7dfb8272ceff9b6231476ded461fc080220935fa8aedf847d691f967995370a1"} Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.887812 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkld6" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.887820 4899 scope.go:117] "RemoveContainer" containerID="84628b1abc611e2a47c228a32f333d83f43886e4e042e0479f006191a320ad6b" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.905078 4899 scope.go:117] "RemoveContainer" containerID="4bab0f6e516a396e76ccb1f6068ebf53e784d8f76cf0ae7a9a7ea1cfe8322410" Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.915865 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.921987 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lkld6"] Jan 26 21:21:29 crc kubenswrapper[4899]: I0126 21:21:29.940159 4899 scope.go:117] "RemoveContainer" containerID="29b8eee60f829c75cdaf751f3f62e4dfa55d84379b843a916d4f4d26d59ada46" Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.108971 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.109039 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.109089 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.109713 4899 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"} pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.109779 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" gracePeriod=600 Jan 26 21:21:30 crc kubenswrapper[4899]: E0126 21:21:30.230185 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.896581 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" exitCode=0 Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.896690 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"} Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.896788 4899 scope.go:117] "RemoveContainer" 
containerID="b003bc5d33f730ffb57f781e8537058a3b7ee2bda8e0f8bdef749775797532a8"
Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.897452 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:21:30 crc kubenswrapper[4899]: E0126 21:21:30.897745 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:21:30 crc kubenswrapper[4899]: I0126 21:21:30.954448 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" path="/var/lib/kubelet/pods/0d6a4fd4-0f21-43b7-a94a-13b506122741/volumes"
Jan 26 21:21:45 crc kubenswrapper[4899]: I0126 21:21:45.930308 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:21:45 crc kubenswrapper[4899]: E0126 21:21:45.932091 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:21:58 crc kubenswrapper[4899]: I0126 21:21:58.934274 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:21:58 crc kubenswrapper[4899]: E0126 21:21:58.935033 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:22:09 crc kubenswrapper[4899]: I0126 21:22:09.931227 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:22:09 crc kubenswrapper[4899]: E0126 21:22:09.931841 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:22:20 crc kubenswrapper[4899]: I0126 21:22:20.935077 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:22:20 crc kubenswrapper[4899]: E0126 21:22:20.936337 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:22:33 crc kubenswrapper[4899]: I0126 21:22:33.945918 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:22:33 crc kubenswrapper[4899]: E0126 21:22:33.946910 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:22:40 crc kubenswrapper[4899]: I0126 21:22:40.373169 4899 generic.go:334] "Generic (PLEG): container finished" podID="c052e247-0e73-40f2-a41c-96e408983b75" containerID="e729f17e8ce1bde998d6dc9f582f88ef368b3c612b5920f007a6ee735cd3465c" exitCode=0
Jan 26 21:22:40 crc kubenswrapper[4899]: I0126 21:22:40.373244 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzmz2/must-gather-hdb56" event={"ID":"c052e247-0e73-40f2-a41c-96e408983b75","Type":"ContainerDied","Data":"e729f17e8ce1bde998d6dc9f582f88ef368b3c612b5920f007a6ee735cd3465c"}
Jan 26 21:22:40 crc kubenswrapper[4899]: I0126 21:22:40.374067 4899 scope.go:117] "RemoveContainer" containerID="e729f17e8ce1bde998d6dc9f582f88ef368b3c612b5920f007a6ee735cd3465c"
Jan 26 21:22:41 crc kubenswrapper[4899]: I0126 21:22:41.172236 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzmz2_must-gather-hdb56_c052e247-0e73-40f2-a41c-96e408983b75/gather/0.log"
Jan 26 21:22:45 crc kubenswrapper[4899]: I0126 21:22:45.930685 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:22:45 crc kubenswrapper[4899]: E0126 21:22:45.931260 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.121637 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rzmz2/must-gather-hdb56"]
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.122270 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rzmz2/must-gather-hdb56" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="copy" containerID="cri-o://55adbc267ebd6e69b2e141a89aa38155529aaa677633ad0f73020490762d99cd" gracePeriod=2
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.127764 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rzmz2/must-gather-hdb56"]
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.426056 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzmz2_must-gather-hdb56_c052e247-0e73-40f2-a41c-96e408983b75/copy/0.log"
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.426821 4899 generic.go:334] "Generic (PLEG): container finished" podID="c052e247-0e73-40f2-a41c-96e408983b75" containerID="55adbc267ebd6e69b2e141a89aa38155529aaa677633ad0f73020490762d99cd" exitCode=143
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.464193 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzmz2_must-gather-hdb56_c052e247-0e73-40f2-a41c-96e408983b75/copy/0.log"
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.464488 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rzmz2/must-gather-hdb56"
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.624477 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output\") pod \"c052e247-0e73-40f2-a41c-96e408983b75\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") "
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.624538 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mjpp\" (UniqueName: \"kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp\") pod \"c052e247-0e73-40f2-a41c-96e408983b75\" (UID: \"c052e247-0e73-40f2-a41c-96e408983b75\") "
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.630568 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp" (OuterVolumeSpecName: "kube-api-access-4mjpp") pod "c052e247-0e73-40f2-a41c-96e408983b75" (UID: "c052e247-0e73-40f2-a41c-96e408983b75"). InnerVolumeSpecName "kube-api-access-4mjpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.697279 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c052e247-0e73-40f2-a41c-96e408983b75" (UID: "c052e247-0e73-40f2-a41c-96e408983b75"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.727172 4899 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c052e247-0e73-40f2-a41c-96e408983b75-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.727243 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mjpp\" (UniqueName: \"kubernetes.io/projected/c052e247-0e73-40f2-a41c-96e408983b75-kube-api-access-4mjpp\") on node \"crc\" DevicePath \"\""
Jan 26 21:22:48 crc kubenswrapper[4899]: I0126 21:22:48.938374 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c052e247-0e73-40f2-a41c-96e408983b75" path="/var/lib/kubelet/pods/c052e247-0e73-40f2-a41c-96e408983b75/volumes"
Jan 26 21:22:49 crc kubenswrapper[4899]: I0126 21:22:49.434150 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzmz2_must-gather-hdb56_c052e247-0e73-40f2-a41c-96e408983b75/copy/0.log"
Jan 26 21:22:49 crc kubenswrapper[4899]: I0126 21:22:49.434503 4899 scope.go:117] "RemoveContainer" containerID="55adbc267ebd6e69b2e141a89aa38155529aaa677633ad0f73020490762d99cd"
Jan 26 21:22:49 crc kubenswrapper[4899]: I0126 21:22:49.434572 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rzmz2/must-gather-hdb56"
Jan 26 21:22:49 crc kubenswrapper[4899]: I0126 21:22:49.453518 4899 scope.go:117] "RemoveContainer" containerID="e729f17e8ce1bde998d6dc9f582f88ef368b3c612b5920f007a6ee735cd3465c"
Jan 26 21:23:00 crc kubenswrapper[4899]: I0126 21:23:00.934176 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:23:00 crc kubenswrapper[4899]: E0126 21:23:00.934940 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.190460 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.190987 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="extract-content"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191006 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="extract-content"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191019 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="copy"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191027 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="copy"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191049 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="extract-utilities"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191059 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="extract-utilities"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191072 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="extract-content"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191101 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="extract-content"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191114 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191124 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191141 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191149 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191162 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="gather"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191170 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="gather"
Jan 26 21:23:06 crc kubenswrapper[4899]: E0126 21:23:06.191181 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="extract-utilities"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191188 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="extract-utilities"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191290 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="335d2162-69b9-448c-af07-d0df93cdf597" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191300 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="gather"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191309 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6a4fd4-0f21-43b7-a94a-13b506122741" containerName="registry-server"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.191316 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="c052e247-0e73-40f2-a41c-96e408983b75" containerName="copy"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.192130 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.204378 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.369263 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdq2n\" (UniqueName: \"kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.369332 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.369369 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.470542 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.470606 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.470705 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdq2n\" (UniqueName: \"kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.471194 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.471513 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.503092 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdq2n\" (UniqueName: \"kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n\") pod \"certified-operators-pvnkp\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") " pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.515465 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:06 crc kubenswrapper[4899]: I0126 21:23:06.957792 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:07 crc kubenswrapper[4899]: I0126 21:23:07.572522 4899 generic.go:334] "Generic (PLEG): container finished" podID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerID="96f5fc65f8c1bb357e669f2d65f4479a48fa848d3476c95a5d1e371aea1f2bff" exitCode=0
Jan 26 21:23:07 crc kubenswrapper[4899]: I0126 21:23:07.572569 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerDied","Data":"96f5fc65f8c1bb357e669f2d65f4479a48fa848d3476c95a5d1e371aea1f2bff"}
Jan 26 21:23:07 crc kubenswrapper[4899]: I0126 21:23:07.572598 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerStarted","Data":"b603d135e89d433f209141f1882c61ce9226267ff1b85922701f57221c0d486f"}
Jan 26 21:23:08 crc kubenswrapper[4899]: I0126 21:23:08.582500 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerStarted","Data":"8bdf077214165b3c2895759e6e361fd113364e4d02e36c9348ebb2d83b16b80f"}
Jan 26 21:23:09 crc kubenswrapper[4899]: I0126 21:23:09.589406 4899 generic.go:334] "Generic (PLEG): container finished" podID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerID="8bdf077214165b3c2895759e6e361fd113364e4d02e36c9348ebb2d83b16b80f" exitCode=0
Jan 26 21:23:09 crc kubenswrapper[4899]: I0126 21:23:09.589467 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerDied","Data":"8bdf077214165b3c2895759e6e361fd113364e4d02e36c9348ebb2d83b16b80f"}
Jan 26 21:23:10 crc kubenswrapper[4899]: I0126 21:23:10.597434 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerStarted","Data":"4d2b54201a4a31e642d115a2f8823fb97a76e9a89476ebbf8e5cf13305d5031c"}
Jan 26 21:23:10 crc kubenswrapper[4899]: I0126 21:23:10.618209 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvnkp" podStartSLOduration=2.190101457 podStartE2EDuration="4.618189974s" podCreationTimestamp="2026-01-26 21:23:06 +0000 UTC" firstStartedPulling="2026-01-26 21:23:07.57394681 +0000 UTC m=+1676.955534847" lastFinishedPulling="2026-01-26 21:23:10.002035337 +0000 UTC m=+1679.383623364" observedRunningTime="2026-01-26 21:23:10.615661162 +0000 UTC m=+1679.997249189" watchObservedRunningTime="2026-01-26 21:23:10.618189974 +0000 UTC m=+1679.999778001"
Jan 26 21:23:12 crc kubenswrapper[4899]: I0126 21:23:12.931151 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:23:12 crc kubenswrapper[4899]: E0126 21:23:12.932405 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:23:16 crc kubenswrapper[4899]: I0126 21:23:16.516295 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:16 crc kubenswrapper[4899]: I0126 21:23:16.516355 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:16 crc kubenswrapper[4899]: I0126 21:23:16.559193 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:16 crc kubenswrapper[4899]: I0126 21:23:16.670603 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:16 crc kubenswrapper[4899]: I0126 21:23:16.785528 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.246120 4899 scope.go:117] "RemoveContainer" containerID="57c890e0b20b53bfa4030a0e7538ebfe9d7be9b74e610ce103dc42d5a2822a99"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.271307 4899 scope.go:117] "RemoveContainer" containerID="aeb53d649d1a3c83fc69fef47171a4125505527c9b41b5aaa51f7ffb156ca8ec"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.332327 4899 scope.go:117] "RemoveContainer" containerID="593bb6987ce9a00b2ed9419845f9cc492b60d858f9fc2e53d8e595a7bfad7f6a"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.349555 4899 scope.go:117] "RemoveContainer" containerID="40d36d698b3c1c2f803d20c6a0d155485bea9677d095e245325d7a38b08195b7"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.372079 4899 scope.go:117] "RemoveContainer" containerID="3b35976384a4da5da6b2567db096ec17dd80c593e6cacb25611c5c053239b1b7"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.399743 4899 scope.go:117] "RemoveContainer" containerID="09e5df032b970d5f2796f97efa127b8697b61628dd66fe2585414d0578e97cde"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.416306 4899 scope.go:117] "RemoveContainer" containerID="925593cf1093756ffc03515da9a5f83425874f840809e0baec352d91c434ee2b"
Jan 26 21:23:17 crc kubenswrapper[4899]: I0126 21:23:17.466097 4899 scope.go:117] "RemoveContainer" containerID="2acb63eddfe0cfb8110d660fd1bf7d6e2e57e0b611230af4b427404ece33b8c3"
Jan 26 21:23:18 crc kubenswrapper[4899]: I0126 21:23:18.643348 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvnkp" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="registry-server" containerID="cri-o://4d2b54201a4a31e642d115a2f8823fb97a76e9a89476ebbf8e5cf13305d5031c" gracePeriod=2
Jan 26 21:23:19 crc kubenswrapper[4899]: I0126 21:23:19.652326 4899 generic.go:334] "Generic (PLEG): container finished" podID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerID="4d2b54201a4a31e642d115a2f8823fb97a76e9a89476ebbf8e5cf13305d5031c" exitCode=0
Jan 26 21:23:19 crc kubenswrapper[4899]: I0126 21:23:19.652369 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerDied","Data":"4d2b54201a4a31e642d115a2f8823fb97a76e9a89476ebbf8e5cf13305d5031c"}
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.101937 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.257549 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content\") pod \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") "
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.257627 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdq2n\" (UniqueName: \"kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n\") pod \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") "
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.257688 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities\") pod \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\" (UID: \"849ee1a1-b53b-4bed-ab88-0c30569ef81d\") "
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.258609 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities" (OuterVolumeSpecName: "utilities") pod "849ee1a1-b53b-4bed-ab88-0c30569ef81d" (UID: "849ee1a1-b53b-4bed-ab88-0c30569ef81d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.258963 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.262835 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n" (OuterVolumeSpecName: "kube-api-access-vdq2n") pod "849ee1a1-b53b-4bed-ab88-0c30569ef81d" (UID: "849ee1a1-b53b-4bed-ab88-0c30569ef81d"). InnerVolumeSpecName "kube-api-access-vdq2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.305444 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "849ee1a1-b53b-4bed-ab88-0c30569ef81d" (UID: "849ee1a1-b53b-4bed-ab88-0c30569ef81d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.360383 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849ee1a1-b53b-4bed-ab88-0c30569ef81d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.360456 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdq2n\" (UniqueName: \"kubernetes.io/projected/849ee1a1-b53b-4bed-ab88-0c30569ef81d-kube-api-access-vdq2n\") on node \"crc\" DevicePath \"\""
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.660871 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnkp" event={"ID":"849ee1a1-b53b-4bed-ab88-0c30569ef81d","Type":"ContainerDied","Data":"b603d135e89d433f209141f1882c61ce9226267ff1b85922701f57221c0d486f"}
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.660960 4899 scope.go:117] "RemoveContainer" containerID="4d2b54201a4a31e642d115a2f8823fb97a76e9a89476ebbf8e5cf13305d5031c"
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.661075 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnkp"
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.677351 4899 scope.go:117] "RemoveContainer" containerID="8bdf077214165b3c2895759e6e361fd113364e4d02e36c9348ebb2d83b16b80f"
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.694380 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.698872 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvnkp"]
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.706169 4899 scope.go:117] "RemoveContainer" containerID="96f5fc65f8c1bb357e669f2d65f4479a48fa848d3476c95a5d1e371aea1f2bff"
Jan 26 21:23:20 crc kubenswrapper[4899]: I0126 21:23:20.938143 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" path="/var/lib/kubelet/pods/849ee1a1-b53b-4bed-ab88-0c30569ef81d/volumes"
Jan 26 21:23:24 crc kubenswrapper[4899]: I0126 21:23:24.930885 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:23:24 crc kubenswrapper[4899]: E0126 21:23:24.931392 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:23:38 crc kubenswrapper[4899]: I0126 21:23:38.932993 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:23:38 crc kubenswrapper[4899]: E0126 21:23:38.934291 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:23:50 crc kubenswrapper[4899]: I0126 21:23:50.932736 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:23:50 crc kubenswrapper[4899]: E0126 21:23:50.933473 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:24:03 crc kubenswrapper[4899]: I0126 21:24:03.932211 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:24:03 crc kubenswrapper[4899]: E0126 21:24:03.933007 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:24:17 crc kubenswrapper[4899]: I0126 21:24:17.575224 4899 scope.go:117] "RemoveContainer" containerID="d60060cb472e5ff0e4493f6cc8c54c7547ef9044e0027d22ea81cdc5847425a4"
Jan 26 21:24:17 crc kubenswrapper[4899]: I0126 21:24:17.607337 4899 scope.go:117] "RemoveContainer" containerID="5c80a218156fd6314d1f4311caf7ea413a9c662fc8cbaf703796cfe62aabc545"
Jan 26 21:24:18 crc kubenswrapper[4899]: I0126 21:24:18.931219 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:24:18 crc kubenswrapper[4899]: E0126 21:24:18.931725 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:24:29 crc kubenswrapper[4899]: I0126 21:24:29.930264 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:24:29 crc kubenswrapper[4899]: E0126 21:24:29.931085 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:24:43 crc kubenswrapper[4899]: I0126 21:24:43.930700 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:24:43 crc kubenswrapper[4899]: E0126 21:24:43.931577 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:24:57 crc kubenswrapper[4899]: I0126 21:24:57.931060 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:24:57 crc kubenswrapper[4899]: E0126 21:24:57.932074 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:25:11 crc kubenswrapper[4899]: I0126 21:25:11.931150 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d"
Jan 26 21:25:11 crc kubenswrapper[4899]: E0126 21:25:11.934005 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.233736 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tqmbx/must-gather-qx7kx"]
Jan 26 21:25:24 crc kubenswrapper[4899]: E0126 21:25:24.234328 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="registry-server"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.234342 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="registry-server"
Jan 26 21:25:24 crc kubenswrapper[4899]: E0126 21:25:24.234370 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="extract-utilities"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.234377 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="extract-utilities"
Jan 26 21:25:24 crc kubenswrapper[4899]: E0126 21:25:24.234386 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="extract-content"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.234393 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="extract-content"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.234513 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="849ee1a1-b53b-4bed-ab88-0c30569ef81d" containerName="registry-server"
Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.235093 4899 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.238255 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tqmbx"/"kube-root-ca.crt" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.238254 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tqmbx"/"default-dockercfg-5bnx9" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.238622 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tqmbx"/"openshift-service-ca.crt" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.248503 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tqmbx/must-gather-qx7kx"] Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.372490 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9996\" (UniqueName: \"kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.372538 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.473852 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9996\" (UniqueName: \"kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " 
pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.473907 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.474370 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.493470 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9996\" (UniqueName: \"kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996\") pod \"must-gather-qx7kx\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.558085 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.855551 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tqmbx/must-gather-qx7kx"] Jan 26 21:25:24 crc kubenswrapper[4899]: I0126 21:25:24.932049 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:25:24 crc kubenswrapper[4899]: E0126 21:25:24.932598 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:25:25 crc kubenswrapper[4899]: I0126 21:25:25.505096 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" event={"ID":"47f3d4c7-492b-4d28-99fd-cda2480569ab","Type":"ContainerStarted","Data":"5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1"} Jan 26 21:25:25 crc kubenswrapper[4899]: I0126 21:25:25.505144 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" event={"ID":"47f3d4c7-492b-4d28-99fd-cda2480569ab","Type":"ContainerStarted","Data":"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a"} Jan 26 21:25:25 crc kubenswrapper[4899]: I0126 21:25:25.505174 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" event={"ID":"47f3d4c7-492b-4d28-99fd-cda2480569ab","Type":"ContainerStarted","Data":"7692f78a9912daddd7b0eb669d394c4c20cd25165d1df3533a30b2363dc3e7e6"} Jan 26 21:25:25 crc kubenswrapper[4899]: I0126 21:25:25.526427 4899 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" podStartSLOduration=1.526408684 podStartE2EDuration="1.526408684s" podCreationTimestamp="2026-01-26 21:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 21:25:25.520223361 +0000 UTC m=+1814.901811418" watchObservedRunningTime="2026-01-26 21:25:25.526408684 +0000 UTC m=+1814.907996721" Jan 26 21:25:39 crc kubenswrapper[4899]: I0126 21:25:39.931348 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:25:39 crc kubenswrapper[4899]: E0126 21:25:39.932812 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:25:50 crc kubenswrapper[4899]: I0126 21:25:50.934021 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:25:50 crc kubenswrapper[4899]: E0126 21:25:50.934728 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:26:03 crc kubenswrapper[4899]: I0126 21:26:03.931122 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 
21:26:03 crc kubenswrapper[4899]: E0126 21:26:03.932006 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:26:12 crc kubenswrapper[4899]: I0126 21:26:12.582811 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5kjdm_7ce93bf9-c281-45cc-9697-8a6a8eb9d6e0/control-plane-machine-set-operator/0.log" Jan 26 21:26:12 crc kubenswrapper[4899]: I0126 21:26:12.748292 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lxbfv_53f1cb30-6429-4ebc-8301-5f1de3e70611/kube-rbac-proxy/0.log" Jan 26 21:26:12 crc kubenswrapper[4899]: I0126 21:26:12.777218 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lxbfv_53f1cb30-6429-4ebc-8301-5f1de3e70611/machine-api-operator/0.log" Jan 26 21:26:14 crc kubenswrapper[4899]: I0126 21:26:14.931191 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:26:14 crc kubenswrapper[4899]: E0126 21:26:14.931663 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:26:28 crc kubenswrapper[4899]: I0126 21:26:28.930268 
4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:26:28 crc kubenswrapper[4899]: E0126 21:26:28.930961 4899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwvzr_openshift-machine-config-operator(af2334b6-f4a1-489a-acb2-0ddef342559d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" Jan 26 21:26:39 crc kubenswrapper[4899]: I0126 21:26:39.953314 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k5x85_887bd990-cb6d-4f69-bcf2-cf642b2c165b/kube-rbac-proxy/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.017535 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k5x85_887bd990-cb6d-4f69-bcf2-cf642b2c165b/controller/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.203829 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.364574 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.402630 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.402987 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.419716 4899 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.610158 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.655218 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.660789 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.676243 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.836280 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-frr-files/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.840677 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-metrics/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.866223 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/cp-reloader/0.log" Jan 26 21:26:40 crc kubenswrapper[4899]: I0126 21:26:40.869358 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/controller/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.025406 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/frr-metrics/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.059992 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/kube-rbac-proxy-frr/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.108208 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/kube-rbac-proxy/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.258208 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/reloader/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.355903 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5kknz_2c74cccf-4954-447b-90d6-438a41878caa/frr-k8s-webhook-server/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.536792 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78b88669b5-qgw6p_65a48fb2-a892-4d8e-96ba-7fee5747d2f3/manager/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.568311 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t97hl_aa46d965-a136-4e45-bee6-e5a64dc763f5/frr/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.666422 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d9559955b-jj9n5_7c68eca2-a2e7-4a3c-b614-6e8104b2b0ec/webhook-server/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.741578 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ql4jc_5aede76a-7f3b-4b2d-827f-5aae59a3a65f/kube-rbac-proxy/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.870365 4899 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ql4jc_5aede76a-7f3b-4b2d-827f-5aae59a3a65f/speaker/0.log" Jan 26 21:26:41 crc kubenswrapper[4899]: I0126 21:26:41.931176 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:26:43 crc kubenswrapper[4899]: I0126 21:26:43.091602 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"14e2d10e5863effcba984e46bb902fa0a7ce2ebbfc38ab56141acb4fe0b7fccc"} Jan 26 21:27:04 crc kubenswrapper[4899]: I0126 21:27:04.891471 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.071571 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.089642 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.119551 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.272891 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/extract/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: 
I0126 21:27:05.301866 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/util/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.348207 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn9px9_91a627e5-d605-4e13-bec3-0bdfa43e0a72/pull/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.451007 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.599008 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.721600 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.731225 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.942953 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-utilities/0.log" Jan 26 21:27:05 crc kubenswrapper[4899]: I0126 21:27:05.953003 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/extract-content/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.160257 4899 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.320451 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6tzdt_3f877954-92f6-484c-a96e-388422e23f27/registry-server/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.355314 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.366302 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.395604 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.584684 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-utilities/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.628905 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/extract-content/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.787521 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fqdv9_6c65153e-2169-4842-9a1c-60b0e20f4255/marketplace-operator/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.902642 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:27:06 crc kubenswrapper[4899]: I0126 21:27:06.968125 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jkhqw_6215d320-2289-4e53-9c43-466c52516a43/registry-server/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.116193 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.181121 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.198605 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.335834 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-content/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.378639 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/extract-utilities/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.438598 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9m2hr_67ff9111-7a25-4b47-adb6-4e765311e6d9/registry-server/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.562231 4899 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.736165 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.765143 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:27:07 crc kubenswrapper[4899]: I0126 21:27:07.823518 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:27:08 crc kubenswrapper[4899]: I0126 21:27:08.044677 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-content/0.log" Jan 26 21:27:08 crc kubenswrapper[4899]: I0126 21:27:08.135199 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/extract-utilities/0.log" Jan 26 21:27:08 crc kubenswrapper[4899]: I0126 21:27:08.234796 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkn2z_01adb97d-6f07-4768-a883-fbcf0a1777ff/registry-server/0.log" Jan 26 21:28:28 crc kubenswrapper[4899]: I0126 21:28:28.710504 4899 generic.go:334] "Generic (PLEG): container finished" podID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerID="43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a" exitCode=0 Jan 26 21:28:28 crc kubenswrapper[4899]: I0126 21:28:28.710707 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" 
event={"ID":"47f3d4c7-492b-4d28-99fd-cda2480569ab","Type":"ContainerDied","Data":"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a"} Jan 26 21:28:28 crc kubenswrapper[4899]: I0126 21:28:28.713071 4899 scope.go:117] "RemoveContainer" containerID="43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a" Jan 26 21:28:29 crc kubenswrapper[4899]: I0126 21:28:29.155730 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tqmbx_must-gather-qx7kx_47f3d4c7-492b-4d28-99fd-cda2480569ab/gather/0.log" Jan 26 21:28:38 crc kubenswrapper[4899]: I0126 21:28:38.791339 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tqmbx/must-gather-qx7kx"] Jan 26 21:28:38 crc kubenswrapper[4899]: I0126 21:28:38.792189 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="copy" containerID="cri-o://5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1" gracePeriod=2 Jan 26 21:28:38 crc kubenswrapper[4899]: I0126 21:28:38.805678 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tqmbx/must-gather-qx7kx"] Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.633319 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tqmbx_must-gather-qx7kx_47f3d4c7-492b-4d28-99fd-cda2480569ab/copy/0.log" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.634105 4899 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.786153 4899 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tqmbx_must-gather-qx7kx_47f3d4c7-492b-4d28-99fd-cda2480569ab/copy/0.log" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.786689 4899 generic.go:334] "Generic (PLEG): container finished" podID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerID="5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1" exitCode=143 Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.786751 4899 scope.go:117] "RemoveContainer" containerID="5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.786759 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tqmbx/must-gather-qx7kx" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.787028 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output\") pod \"47f3d4c7-492b-4d28-99fd-cda2480569ab\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.787224 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9996\" (UniqueName: \"kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996\") pod \"47f3d4c7-492b-4d28-99fd-cda2480569ab\" (UID: \"47f3d4c7-492b-4d28-99fd-cda2480569ab\") " Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.792586 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996" (OuterVolumeSpecName: "kube-api-access-v9996") pod "47f3d4c7-492b-4d28-99fd-cda2480569ab" (UID: 
"47f3d4c7-492b-4d28-99fd-cda2480569ab"). InnerVolumeSpecName "kube-api-access-v9996". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.822220 4899 scope.go:117] "RemoveContainer" containerID="43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.859399 4899 scope.go:117] "RemoveContainer" containerID="5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1" Jan 26 21:28:39 crc kubenswrapper[4899]: E0126 21:28:39.859911 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1\": container with ID starting with 5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1 not found: ID does not exist" containerID="5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.859975 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1"} err="failed to get container status \"5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1\": rpc error: code = NotFound desc = could not find container \"5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1\": container with ID starting with 5165266d2ef1f28bd04db0404ba28b49f178cf143e77a66cdbc72d850ae2dfa1 not found: ID does not exist" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.860004 4899 scope.go:117] "RemoveContainer" containerID="43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a" Jan 26 21:28:39 crc kubenswrapper[4899]: E0126 21:28:39.860508 4899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a\": 
container with ID starting with 43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a not found: ID does not exist" containerID="43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.860588 4899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a"} err="failed to get container status \"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a\": rpc error: code = NotFound desc = could not find container \"43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a\": container with ID starting with 43625b3f4afb762e08d739de933c5e1d2506b2755cff50079ea7fa0c76a6941a not found: ID does not exist" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.865497 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "47f3d4c7-492b-4d28-99fd-cda2480569ab" (UID: "47f3d4c7-492b-4d28-99fd-cda2480569ab"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.888253 4899 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f3d4c7-492b-4d28-99fd-cda2480569ab-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 21:28:39 crc kubenswrapper[4899]: I0126 21:28:39.888285 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9996\" (UniqueName: \"kubernetes.io/projected/47f3d4c7-492b-4d28-99fd-cda2480569ab-kube-api-access-v9996\") on node \"crc\" DevicePath \"\"" Jan 26 21:28:40 crc kubenswrapper[4899]: I0126 21:28:40.938916 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" path="/var/lib/kubelet/pods/47f3d4c7-492b-4d28-99fd-cda2480569ab/volumes" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.045168 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:28:52 crc kubenswrapper[4899]: E0126 21:28:52.047024 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="gather" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.047125 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="gather" Jan 26 21:28:52 crc kubenswrapper[4899]: E0126 21:28:52.047210 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="copy" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.047308 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="copy" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.047530 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="copy" Jan 26 21:28:52 crc 
kubenswrapper[4899]: I0126 21:28:52.047613 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f3d4c7-492b-4d28-99fd-cda2480569ab" containerName="gather" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.051870 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.055377 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.166025 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phlqs\" (UniqueName: \"kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.166085 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.166117 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.266992 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.267057 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.267123 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phlqs\" (UniqueName: \"kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.268056 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.268287 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.292281 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phlqs\" (UniqueName: 
\"kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs\") pod \"redhat-operators-g96hm\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.375200 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.591082 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:28:52 crc kubenswrapper[4899]: W0126 21:28:52.598990 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7018469_0a1b_4f53_9041_60770ed5120e.slice/crio-03a6492c7dd75ecf762df61efad9f1db3527b9a5f09b82a52108868ff7ab084c WatchSource:0}: Error finding container 03a6492c7dd75ecf762df61efad9f1db3527b9a5f09b82a52108868ff7ab084c: Status 404 returned error can't find the container with id 03a6492c7dd75ecf762df61efad9f1db3527b9a5f09b82a52108868ff7ab084c Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.880823 4899 generic.go:334] "Generic (PLEG): container finished" podID="b7018469-0a1b-4f53-9041-60770ed5120e" containerID="5f691595b3e44900f8e2d15f5bbad45898c15382e84a3f3bf24735fe6cc84fcf" exitCode=0 Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.880888 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerDied","Data":"5f691595b3e44900f8e2d15f5bbad45898c15382e84a3f3bf24735fe6cc84fcf"} Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.880960 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" 
event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerStarted","Data":"03a6492c7dd75ecf762df61efad9f1db3527b9a5f09b82a52108868ff7ab084c"} Jan 26 21:28:52 crc kubenswrapper[4899]: I0126 21:28:52.882713 4899 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 21:28:53 crc kubenswrapper[4899]: I0126 21:28:53.888983 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerStarted","Data":"67f52d3157ebb50060613b45a0e1ec91c3ad543602db1aa18174b959b01652ff"} Jan 26 21:28:54 crc kubenswrapper[4899]: I0126 21:28:54.897166 4899 generic.go:334] "Generic (PLEG): container finished" podID="b7018469-0a1b-4f53-9041-60770ed5120e" containerID="67f52d3157ebb50060613b45a0e1ec91c3ad543602db1aa18174b959b01652ff" exitCode=0 Jan 26 21:28:54 crc kubenswrapper[4899]: I0126 21:28:54.897341 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerDied","Data":"67f52d3157ebb50060613b45a0e1ec91c3ad543602db1aa18174b959b01652ff"} Jan 26 21:28:55 crc kubenswrapper[4899]: I0126 21:28:55.904732 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerStarted","Data":"811bcbfe5f25d94bde3705043cfcfefe723406b0ad2373b77c57f845d5cb1a1e"} Jan 26 21:28:55 crc kubenswrapper[4899]: I0126 21:28:55.923093 4899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g96hm" podStartSLOduration=1.483932016 podStartE2EDuration="3.923073769s" podCreationTimestamp="2026-01-26 21:28:52 +0000 UTC" firstStartedPulling="2026-01-26 21:28:52.882416143 +0000 UTC m=+2022.264004180" lastFinishedPulling="2026-01-26 21:28:55.321557906 +0000 UTC m=+2024.703145933" 
observedRunningTime="2026-01-26 21:28:55.920848596 +0000 UTC m=+2025.302436643" watchObservedRunningTime="2026-01-26 21:28:55.923073769 +0000 UTC m=+2025.304661806" Jan 26 21:29:00 crc kubenswrapper[4899]: I0126 21:29:00.109882 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:29:00 crc kubenswrapper[4899]: I0126 21:29:00.111053 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:29:02 crc kubenswrapper[4899]: I0126 21:29:02.376066 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:02 crc kubenswrapper[4899]: I0126 21:29:02.376119 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:02 crc kubenswrapper[4899]: I0126 21:29:02.457747 4899 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:02 crc kubenswrapper[4899]: I0126 21:29:02.993144 4899 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:03 crc kubenswrapper[4899]: I0126 21:29:03.036819 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:29:04 crc kubenswrapper[4899]: I0126 21:29:04.962352 4899 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-g96hm" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="registry-server" containerID="cri-o://811bcbfe5f25d94bde3705043cfcfefe723406b0ad2373b77c57f845d5cb1a1e" gracePeriod=2 Jan 26 21:29:06 crc kubenswrapper[4899]: I0126 21:29:06.974455 4899 generic.go:334] "Generic (PLEG): container finished" podID="b7018469-0a1b-4f53-9041-60770ed5120e" containerID="811bcbfe5f25d94bde3705043cfcfefe723406b0ad2373b77c57f845d5cb1a1e" exitCode=0 Jan 26 21:29:06 crc kubenswrapper[4899]: I0126 21:29:06.974642 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerDied","Data":"811bcbfe5f25d94bde3705043cfcfefe723406b0ad2373b77c57f845d5cb1a1e"} Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.133115 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.295797 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities\") pod \"b7018469-0a1b-4f53-9041-60770ed5120e\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.296162 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phlqs\" (UniqueName: \"kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs\") pod \"b7018469-0a1b-4f53-9041-60770ed5120e\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.296215 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content\") pod 
\"b7018469-0a1b-4f53-9041-60770ed5120e\" (UID: \"b7018469-0a1b-4f53-9041-60770ed5120e\") " Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.296885 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities" (OuterVolumeSpecName: "utilities") pod "b7018469-0a1b-4f53-9041-60770ed5120e" (UID: "b7018469-0a1b-4f53-9041-60770ed5120e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.302326 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs" (OuterVolumeSpecName: "kube-api-access-phlqs") pod "b7018469-0a1b-4f53-9041-60770ed5120e" (UID: "b7018469-0a1b-4f53-9041-60770ed5120e"). InnerVolumeSpecName "kube-api-access-phlqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.397589 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phlqs\" (UniqueName: \"kubernetes.io/projected/b7018469-0a1b-4f53-9041-60770ed5120e-kube-api-access-phlqs\") on node \"crc\" DevicePath \"\"" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.397617 4899 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.445631 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7018469-0a1b-4f53-9041-60770ed5120e" (UID: "b7018469-0a1b-4f53-9041-60770ed5120e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.499110 4899 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7018469-0a1b-4f53-9041-60770ed5120e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.983860 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g96hm" event={"ID":"b7018469-0a1b-4f53-9041-60770ed5120e","Type":"ContainerDied","Data":"03a6492c7dd75ecf762df61efad9f1db3527b9a5f09b82a52108868ff7ab084c"} Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.983989 4899 scope.go:117] "RemoveContainer" containerID="811bcbfe5f25d94bde3705043cfcfefe723406b0ad2373b77c57f845d5cb1a1e" Jan 26 21:29:07 crc kubenswrapper[4899]: I0126 21:29:07.984303 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g96hm" Jan 26 21:29:08 crc kubenswrapper[4899]: I0126 21:29:08.002379 4899 scope.go:117] "RemoveContainer" containerID="67f52d3157ebb50060613b45a0e1ec91c3ad543602db1aa18174b959b01652ff" Jan 26 21:29:08 crc kubenswrapper[4899]: I0126 21:29:08.017656 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:29:08 crc kubenswrapper[4899]: I0126 21:29:08.025052 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g96hm"] Jan 26 21:29:08 crc kubenswrapper[4899]: I0126 21:29:08.053385 4899 scope.go:117] "RemoveContainer" containerID="5f691595b3e44900f8e2d15f5bbad45898c15382e84a3f3bf24735fe6cc84fcf" Jan 26 21:29:08 crc kubenswrapper[4899]: I0126 21:29:08.938050 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" path="/var/lib/kubelet/pods/b7018469-0a1b-4f53-9041-60770ed5120e/volumes" Jan 26 21:29:30 crc 
kubenswrapper[4899]: I0126 21:29:30.109626 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:29:30 crc kubenswrapper[4899]: I0126 21:29:30.110058 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.109427 4899 patch_prober.go:28] interesting pod/machine-config-daemon-wwvzr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.110096 4899 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.110174 4899 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.110847 4899 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14e2d10e5863effcba984e46bb902fa0a7ce2ebbfc38ab56141acb4fe0b7fccc"} 
pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.110998 4899 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" podUID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerName="machine-config-daemon" containerID="cri-o://14e2d10e5863effcba984e46bb902fa0a7ce2ebbfc38ab56141acb4fe0b7fccc" gracePeriod=600 Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.147529 4899 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"] Jan 26 21:30:00 crc kubenswrapper[4899]: E0126 21:30:00.149101 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="registry-server" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.149239 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="registry-server" Jan 26 21:30:00 crc kubenswrapper[4899]: E0126 21:30:00.149394 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="extract-utilities" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.149511 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="extract-utilities" Jan 26 21:30:00 crc kubenswrapper[4899]: E0126 21:30:00.150477 4899 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="extract-content" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.150600 4899 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="extract-content" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 
21:30:00.150911 4899 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7018469-0a1b-4f53-9041-60770ed5120e" containerName="registry-server" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.151857 4899 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.155370 4899 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.155593 4899 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.159608 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"] Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.313278 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.313708 4899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdzh\" (UniqueName: \"kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.313758 4899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.316304 4899 generic.go:334] "Generic (PLEG): container finished" podID="af2334b6-f4a1-489a-acb2-0ddef342559d" containerID="14e2d10e5863effcba984e46bb902fa0a7ce2ebbfc38ab56141acb4fe0b7fccc" exitCode=0 Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.316346 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerDied","Data":"14e2d10e5863effcba984e46bb902fa0a7ce2ebbfc38ab56141acb4fe0b7fccc"} Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.316394 4899 scope.go:117] "RemoveContainer" containerID="1722a73ddf635e5091cb990abebef51402c2e385829017bc09a7ae6ca33c0c3d" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.416486 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.416534 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdzh\" (UniqueName: \"kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc 
kubenswrapper[4899]: I0126 21:30:00.416575 4899 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.417398 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.441093 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.449500 4899 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdzh\" (UniqueName: \"kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh\") pod \"collect-profiles-29491050-t8glp\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.475843 4899 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"
Jan 26 21:30:00 crc kubenswrapper[4899]: I0126 21:30:00.664062 4899 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"]
Jan 26 21:30:00 crc kubenswrapper[4899]: W0126 21:30:00.670854 4899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272bc1bc_c948_46e7_bb28_fd8439348c6f.slice/crio-5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb WatchSource:0}: Error finding container 5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb: Status 404 returned error can't find the container with id 5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb
Jan 26 21:30:01 crc kubenswrapper[4899]: I0126 21:30:01.324833 4899 generic.go:334] "Generic (PLEG): container finished" podID="272bc1bc-c948-46e7-bb28-fd8439348c6f" containerID="3fe258242ebafac0c6ca52ff7bb76f2deaa0682ef996191945be11e0ebbeb017" exitCode=0
Jan 26 21:30:01 crc kubenswrapper[4899]: I0126 21:30:01.325081 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" event={"ID":"272bc1bc-c948-46e7-bb28-fd8439348c6f","Type":"ContainerDied","Data":"3fe258242ebafac0c6ca52ff7bb76f2deaa0682ef996191945be11e0ebbeb017"}
Jan 26 21:30:01 crc kubenswrapper[4899]: I0126 21:30:01.325266 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" event={"ID":"272bc1bc-c948-46e7-bb28-fd8439348c6f","Type":"ContainerStarted","Data":"5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb"}
Jan 26 21:30:01 crc kubenswrapper[4899]: I0126 21:30:01.328469 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwvzr" event={"ID":"af2334b6-f4a1-489a-acb2-0ddef342559d","Type":"ContainerStarted","Data":"168061552ff946735bba8d47fa85f42dfeaa70eb966b4d271cb268e5e86d2340"}
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.528393 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.643711 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume\") pod \"272bc1bc-c948-46e7-bb28-fd8439348c6f\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") "
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.643773 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bdzh\" (UniqueName: \"kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh\") pod \"272bc1bc-c948-46e7-bb28-fd8439348c6f\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") "
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.644464 4899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume\") pod \"272bc1bc-c948-46e7-bb28-fd8439348c6f\" (UID: \"272bc1bc-c948-46e7-bb28-fd8439348c6f\") "
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.645329 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "272bc1bc-c948-46e7-bb28-fd8439348c6f" (UID: "272bc1bc-c948-46e7-bb28-fd8439348c6f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.645549 4899 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/272bc1bc-c948-46e7-bb28-fd8439348c6f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.651200 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "272bc1bc-c948-46e7-bb28-fd8439348c6f" (UID: "272bc1bc-c948-46e7-bb28-fd8439348c6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.651225 4899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh" (OuterVolumeSpecName: "kube-api-access-7bdzh") pod "272bc1bc-c948-46e7-bb28-fd8439348c6f" (UID: "272bc1bc-c948-46e7-bb28-fd8439348c6f"). InnerVolumeSpecName "kube-api-access-7bdzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.746572 4899 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/272bc1bc-c948-46e7-bb28-fd8439348c6f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 21:30:02 crc kubenswrapper[4899]: I0126 21:30:02.746627 4899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bdzh\" (UniqueName: \"kubernetes.io/projected/272bc1bc-c948-46e7-bb28-fd8439348c6f-kube-api-access-7bdzh\") on node \"crc\" DevicePath \"\""
Jan 26 21:30:03 crc kubenswrapper[4899]: I0126 21:30:03.343357 4899 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp" event={"ID":"272bc1bc-c948-46e7-bb28-fd8439348c6f","Type":"ContainerDied","Data":"5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb"}
Jan 26 21:30:03 crc kubenswrapper[4899]: I0126 21:30:03.343396 4899 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c0c68795c5bdaed5c9c2fdf6ad202aaab8605c0ea0774c417c615ba3b694edb"
Jan 26 21:30:03 crc kubenswrapper[4899]: I0126 21:30:03.343441 4899 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491050-t8glp"
Jan 26 21:30:03 crc kubenswrapper[4899]: I0126 21:30:03.588739 4899 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"]
Jan 26 21:30:03 crc kubenswrapper[4899]: I0126 21:30:03.593903 4899 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491005-n4n9h"]
Jan 26 21:30:04 crc kubenswrapper[4899]: I0126 21:30:04.941422 4899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55013211-6291-4060-b512-07030b99b897" path="/var/lib/kubelet/pods/55013211-6291-4060-b512-07030b99b897/volumes"
Jan 26 21:30:17 crc kubenswrapper[4899]: I0126 21:30:17.753641 4899 scope.go:117] "RemoveContainer" containerID="ec62c5c02caff1012a8ddfac5f3e0ffc73a24cbcec1c93bae0f72ecf8c0067d5"